ARK: Robust Knockoffs Inference with Coupling

Yingying Fan is Centennial Chair in Business Administration and Professor, Data Sciences and Operations Department, Marshall School of Business, University of Southern California, Los Angeles, CA 90089 (E-mail: [email protected]). Lan Gao is Assistant Professor, Business Analytics and Statistics Department, Haslam College of Business, University of Tennessee, Knoxville, TN 37996 (E-mail: [email protected]). Jinchi Lv is Kenneth King Stonier Chair in Business Administration and Professor, Data Sciences and Operations Department, Marshall School of Business, University of Southern California, Los Angeles, CA 90089 (E-mail: [email protected]). This work was partially supported by NIH R01 Grant 1R01GM131407-01 and NSF Grants DMS-1953356 and EF-2125142.

Yingying Fan^1, Lan Gao^2 and Jinchi Lv^1
University of Southern California^1 and University of Tennessee^2
July 8, 2023

We investigate the robustness of the model-X knockoffs framework with respect to the misspecified or estimated feature distribution. We achieve such a goal by theoretically studying the feature selection performance of a practically implemented knockoffs algorithm, which we name the approximate knockoffs (ARK) procedure, under the measures of the false discovery rate (FDR) and family-wise error rate (FWER). The approximate knockoffs procedure differs from the model-X knockoffs procedure only in that the former uses the misspecified or estimated feature distribution. A key technique in our theoretical analyses is to couple the approximate knockoffs procedure with the model-X knockoffs procedure so that random variables in these two procedures can be close in realizations. We prove that if such a coupled model-X knockoffs procedure exists, the approximate knockoffs procedure can achieve the asymptotic FDR or FWER control at the target level. We showcase three specific constructions of such coupled model-X knockoff variables, verifying their existence and justifying the robustness of the model-X knockoffs framework.

Running title: ARK
Key words: Knockoffs inference; High dimensionality; Feature selection; False discovery rate control; Family-wise error rate control; Coupling; Robustness

§ INTRODUCTION
The knockoffs inference framework <cit.> is a powerful and innovative tool for feature selection with controlled error rates. In particular, the model-X knockoffs <cit.> achieves the false discovery rate (FDR) control at a predetermined level in finite samples without requiring any specific model assumptions on how the response depends on the features, making it an attractive option for feature selection in a wide range of statistical applications.
The fundamental idea of the knockoffs procedure is to construct knockoff variables that are exchangeable in distribution with the original features but are independent of the response conditional on the original variables. These knockoff variables serve as a control group for the original features, allowing researchers to identify relevant original features for the response. The model-X knockoffs inference has gained increasing popularity since its inception, and there have been flourishing developments and extensions of the knockoffs framework, such as the k-familywise error rate (k-FWER) control with knockoffs <cit.>, power analysis for the knockoffs procedure <cit.>, derandomized knockoffs <cit.>, knockoffs inference for time series data <cit.>, the kernel knockoffs procedure <cit.>, and FDR control by data splitting or creating mirror variables <cit.>.

A key assumption in the model-X knockoffs inference is that the joint distribution of features is known. However, such information is almost never available in practice. There has been overwhelming empirical evidence that the model-X knockoffs framework is robust to misspecified or estimated feature distributions <cit.>. Yet, the theoretical characterization of its robustness is still largely missing. A notable exception is the recent work of <cit.>, where it was formally and elegantly shown that the knockoffs data matrix collecting the knockoff variables can be generated from a distribution, which we refer to as the working distribution for ease of presentation, that is different from the true underlying feature distribution, and that the resulting FDR inflation can be measured by the empirical Kullback–Leibler (KL) divergence between the true conditional distribution X_j | X_-j and the working conditional distribution. Here, X_j ∈ ℝ stands for the jth feature, X_-j ∈ ℝ^p-1 stands for the feature vector with the jth feature removed, and p is the feature dimensionality. Two important assumptions in their analyses for ensuring the asymptotic FDR control are 1) the working distribution should be learned independently from the training data used for feature selection and 2) the empirical KL divergence between the two knockoffs data matrices (of diverging dimensionalities) generated from the working and true distributions, respectively, needs to vanish as the sample size increases. Although their results are general and apply to arbitrary dependence structure of the response on features, these two assumptions do not always reflect the practical implementation. Our results in the current paper are free of the two assumptions discussed above.

To put the statements above into context, especially the one about assumption 2), let us consider the scenario where the true feature matrix has independent and identically distributed (i.i.d.) entries from the t-distribution with ν degrees of freedom, but we misspecify it and use the Gaussian distribution as a working distribution to generate the approximate knockoff variable matrix in ℝ^n × p, where n is the sample size. It can be calculated that the empirical KL divergence between this approximate knockoff variable matrix and the model-X knockoff variable matrix in ℝ^n × p defined in <cit.> has mean and variance both of order np/(ν(ν + p)). Thus, only when ν^2 ≫ n min(n, p) (which is equivalent to np/(ν(ν + p)) → 0) can the FDR inflation derived in <cit.> vanish asymptotically.
In contrast, our theory shows that as long as ν^2 ≫ s^4 (log p)^{4 + 4/γ} for some γ ∈ (0, 1) with s ≪ n^{1/2} a sparsity parameter, the knockoffs procedure based on the working distribution can achieve the asymptotic FDR control. More details for our results and model assumptions are summarized formally in Section <ref>. We provide additional comparisons of our results with those of <cit.> in various parts of the paper where more specifics can be discussed. We emphasize and acknowledge that <cit.> established general robustness results without specific model assumptions, while some of our results rely on certain specific model assumptions. The main point we advocate here is that a different notion of closeness than the KL divergence can be advantageous in studying the robustness of the model-X knockoffs.

The major goal of our paper is to establish a general theory on the robustness of the model-X knockoffs framework for the FDR and FWER control. We approach the problem by studying the performance of the approximate knockoffs (ARK) procedure, the algorithm most commonly implemented in practice when applying the knockoffs framework. The approximate knockoffs procedure differs from the model-X knockoffs in that the former generates the knockoff feature matrix from a working distribution that can be misspecified or learned from the same training data used for feature selection. By showing that the approximate knockoffs inference procedure achieves the asymptotic FDR and FWER control as the sample size increases, we can verify the robustness of the model-X knockoffs. An important idea in our technical analyses is coupling, where we pair the approximate knockoffs procedure with the model-X knockoffs procedure in such a way that random variables in these two paired procedures are close in realizations with high probability. Hereafter, we will refer to the model-X knockoffs as the perfect knockoffs procedure to emphasize its difference from the approximate knockoffs procedure. It is important to emphasize that we require the realizations of random variables in the paired procedures to be close, instead of the corresponding distributions being close. This is a major distinction from the assumption in <cit.>. Our new notion of closeness allows us to justify the robustness of the model-X knockoffs in some broader contexts not covered by studies in the existing literature. We also emphasize that although our conditions are imposed on the perfect knockoff variables, we do not need to know or construct them in implementation; the existence of such variables is sufficient for our theoretical robustness analyses.

We present our theory by first stating the general conditions on the existence of the coupled perfect knockoff statistics and their closeness to the approximate knockoff statistics in Section <ref>, and then providing examples justifying these general conditions in Sections <ref> and <ref>. More specifically, our theory has three layers, related to different stages in applying the knockoffs inference procedure. Our general theory presented in Section <ref> directly makes assumptions on the quality of the approximate knockoff statistics (cf. (<ref>)) by requiring the existence of perfect knockoff statistics that are sufficiently close to the approximate knockoff statistics. Then under some regularity conditions imposed on the distribution of these perfect knockoff statistics, we prove that the FDR and FWER are controlled asymptotically using the approximate knockoff statistics.
This lays the theoretical foundation for our subsequent analyses that are developed by verifying these general conditions in various more specific scenarios. The second layer of our theory, presented in Section <ref>, delves deeper and replaces the coupling condition imposed on the knockoff statistics in Section <ref> with a coupling condition on the approximate knockoff variables generated from some misspecified or estimated feature distribution. Similar in nature to the coupling condition in our general theory, this new condition assumes that there exist perfect knockoff variables that can be coupled with the approximate knockoff variables so that their realizations are close to each other with high probability. Since knockoff statistics are known functions of knockoff variables, such an alternative condition intuitively and naturally leads to the verification of the coupling condition on knockoff statistics in our general theory. Indeed, we showcase using two commonly used knockoff statistics, namely the marginal correlation statistics and the regression coefficient difference statistics, that the coupling condition on knockoff variables can guarantee the coupling condition on knockoff statistics. We also verify that for each of these two constructions of knockoff statistics, the other regularity conditions in our general theory also hold, ensuring the asymptotic FDR and FWER control.

The last layer of our theory is presented in Section <ref> and showcases three specific constructions of the coupled perfect knockoff variables. By imposing conditions on the misspecified or estimated feature distribution, we construct explicitly the coupled perfect knockoff variables and prove that the coupling conditions in the first and second layers of our general theory are satisfied. This gives us a complete theory with conditions imposed on the working distribution for generating knockoff variables and verifies the robustness of the model-X knockoffs inference procedure. Our theory allows high dimensionality of features and does not require an independent learning data set for estimating the feature distribution.

There exist some other less related works in the literature that contribute to relaxing the assumption of a fully known feature distribution in the model-X knockoffs framework. For instance, <cit.> relaxed this assumption by assuming the existence of a sufficient statistic for the model and proposing an alternative conditional exchangeability for knockoffs given the sufficient statistic. <cit.> investigated the robustness of knockoffs inference with estimated feature distribution in terms of the FDR control in the linear model setting where the features follow a latent factor model with parametric idiosyncratic noise. <cit.> provided a theoretical guarantee of the asymptotic FDR control for the approximate knockoffs procedure under an assumption that the FDR function is Lipschitz with respect to the feature covariance matrix when the features follow a joint Gaussian distribution.

The rest of the paper is organized as follows. Section <ref> first introduces the approximate knockoffs procedure and then presents the general conditions and theory for the asymptotic FDR and k-FWER control. We also introduce the coupling idea, a key technique in our theoretical analyses. We illustrate our general theory using two commonly used constructions of knockoff statistics in Section <ref>. Section <ref> further provides three specific constructions of the coupled perfect knockoff variables.
We conclude our paper by summarizing the key results and discussing some future research directions in Section <ref>. All the proofs and technical details are provided in the Supplementary Material.

To facilitate the technical presentation, let us introduce some notation that will be used throughout the paper. We use a_n ≪ b_n or a_n = o(b_n) to represent a_n / b_n → 0, a_n ≫ b_n to represent a_n / b_n → ∞, and a_n ≲ b_n or a_n = O(b_n) to represent a_n ≤ C b_n for an absolute constant C > 0. Let a ∧ b and a ∨ b be the minimal and maximal values of a and b, respectively. For a vector v ∈ ℝ^p, denote by ‖v‖_1, ‖v‖_2, and ‖v‖_0 the ℓ_1-norm, ℓ_2-norm, and ℓ_0-norm of v, respectively. For 1 ≤ j ≤ p, v_j is the jth component of v and v_-j is the subvector of v with the jth component removed. For a matrix M ∈ ℝ^{n × p}, denote by M_{i, j} the (i, j)th entry of M, M_j the jth column of M, and M_{A_1, A_2} the submatrix of M consisting of (M_{i, j})_{i ∈ A_1, j ∈ A_2} for sets A_1 ⊂ {1, …, n} and A_2 ⊂ {1, …, p}. Let ‖M‖_max and ‖M‖_2 be the maximum norm and spectral norm of a matrix M, respectively. For 1 ≤ j ≤ p, -j represents the set {1, …, p} ∖ {j}, and denote by |𝒜| the cardinality of a set 𝒜. For a positive definite matrix B, let λ_min(B) and λ_max(B) be the smallest and largest eigenvalues of B, respectively.

§ GENERAL THEORY ON ROBUST KNOCKOFFS INFERENCE WITH COUPLING
§.§ Model setup and model-X knockoffs framework
Assume that we have n i.i.d. observations {(x_i, y_i)}_{i=1}^n from the population (X, Y), where X = (X_1, …, X_p)^⊤ is the p-dimensional feature vector and Y ∈ ℝ is a scalar response. Here, the feature dimensionality p can diverge with the sample size n. Adopting the matrix notation, the n i.i.d. observations can be written as the data matrix X = (X_{i, j}) ∈ ℝ^{n × p} collecting the values of all the features and the vector y = (y_1, …, y_n)^⊤ ∈ ℝ^n collecting the values of the response. A feature X_j is defined as null (or irrelevant) if and only if it is independent of the response conditional on all the remaining features; that is, Y ⊥ X_j | X_-j, where X_-j is the subvector of X with the jth component removed. Denote by ℋ_0 = {1 ≤ j ≤ p: X_j is null} the set of null features and ℋ_1 = ℋ_0^c that of nonnull (or relevant) features. To ensure the model identifiability and interpretability, we follow <cit.> and assume that ℋ_1 exists and is unique. Further assume that the subset of relevant features is sparse such that p_1 = |ℋ_1| = o(n ∧ p), where |𝒜| stands for the cardinality of a given set. The goal is to select as many relevant features as possible while controlling some error rate measure at the prespecified target level.

Two commonly used measures for evaluating the feature selection performance are the FDR <cit.> and k-FWER <cit.>, where the FDR is defined as the expectation of the fraction of false discoveries among all the discoveries and the k-FWER is defined as the probability of making k or more false discoveries. Specifically, for an outcome Ŝ of some feature selection procedure, the FDR and k-FWER are defined as FDR = E[FDP] with FDP = |Ŝ ∩ ℋ_0| / (|Ŝ| ∨ 1) and k-FWER = ℙ(|Ŝ ∩ ℋ_0| ≥ k), respectively. The model-X knockoffs framework provides a flexible way for controlling the FDR at some prespecified target level in finite samples <cit.>, allowing arbitrary dimensionality of X and arbitrary dependence between the response Y and feature vector X. The knockoffs method has also been explored in the context of the k-FWER control by <cit.>.
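To fix ideas on these two error measures, a minimal Python sketch for computing the false discovery proportion and the k-or-more-false-discoveries event from a selected set Ŝ and the null set ℋ_0 is given below; the function and variable names are ours and purely illustrative, and the k-FWER itself would be estimated by averaging the event indicator over Monte Carlo replications.

```python
def fdp(selected, nulls):
    """False discovery proportion |S ∩ H0| / (|S| ∨ 1) for a selected index set."""
    selected, nulls = set(selected), set(nulls)
    return len(selected & nulls) / max(len(selected), 1)

def at_least_k_false(selected, nulls, k):
    """Indicator of making at least k false discoveries; its expectation over
    repeated experiments is the k-FWER."""
    return len(set(selected) & set(nulls)) >= k

# toy usage with hypothetical index sets
print(fdp([0, 3, 7, 9], range(3, 20)), at_least_k_false([0, 3, 7, 9], range(3, 20), k=2))
```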
A key step of the model-X knockoffs inference <cit.> is to generate the model-X knockoff variables X̃ = (X̃_1, …, X̃_p)^⊤ such that X̃ ⊥ Y | X and (X, X̃)_(S) d= (X, X̃) for each S ⊂ {1, …, p}, where (X, X̃)_(S) is obtained by swapping the components X_j and X̃_j in (X, X̃) for each j ∈ S. The construction of the model-X knockoff variables, which we will refer to as the perfect knockoff variables in the rest of the presentation, requires the exact knowledge of the distribution of the feature vector X. For example, Algorithm 1 in <cit.> provided a general approach to generating the perfect knockoff variables when such information is available. However, the exact knowledge of the feature distribution is usually unavailable in real applications. Thus, in practical implementation, the problem becomes identifying the relevant subset ℋ_1 with the approximate knockoff variables generated from a feature distribution that can be different from the true underlying one. As stated in the Introduction, we study the robustness of the model-X knockoffs procedure by investigating the feature selection performance of its practical implementation, which we name the approximate knockoffs procedure and formally present in the next section for completeness.

§.§ Feature selection with approximate knockoffs
In practice, the approximate knockoffs inference procedure below is popularly implemented for controlling the FDR or k-FWER.

1) Generating approximate knockoff variables. Since the true underlying feature distribution F(·) is generally unavailable, we generate the knockoff variables from some user-specified feature distribution F̂(·), which may depend on the sample (X, y), using the same algorithm proposed for generating the perfect knockoff variables (e.g., Algorithm 1 in <cit.>). Denote by X̂ = (X̂_{i, j}) ∈ ℝ^{n × p} the resulting approximate knockoffs data matrix.

2) Constructing approximate knockoff statistics. Pretend that X̂ were the perfect knockoffs data matrix and follow the same procedure as in <cit.> to calculate the knockoff statistics Ŵ_j with j = 1, …, p. Specifically, we first compute the feature importance statistics (Z_1, …, Z_p, Ẑ_1, …, Ẑ_p)^⊤ = t((X, X̂), y), where t(·) is a measurable function of the input ((X, X̂), y), and Z_j and Ẑ_j measure the importance of the jth feature and its approximate knockoff counterpart relative to the response, respectively. Then the approximate knockoff statistic Ŵ_j for the jth feature is defined as Ŵ_j = f_j(Z_j, Ẑ_j), where f_j(·, ·) is an antisymmetric function satisfying f_j(x, y) = -f_j(y, x). See <cit.> for examples and characterizations of the valid construction of knockoff statistics.

3) Selecting relevant features. Calculate a data-driven threshold T for the knockoff statistics {Ŵ_j}_{j=1}^p and select the set of important features as Ŝ = {1 ≤ j ≤ p: Ŵ_j ≥ T}. The thresholds for the FDR control and k-FWER control are different. Specifically, denoting 𝒲̂ = {|Ŵ_1|, …, |Ŵ_p|}, the thresholds for the FDR and k-FWER control are defined as T = min{t ∈ 𝒲̂: #{j: Ŵ_j ≤ -t} / (#{j: Ŵ_j ≥ t} ∨ 1) ≤ q} and T_v = sup{t ∈ 𝒲̂: #{j: -Ŵ_j ≥ t} = v} with v the largest integer such that ∑_{i = k}^∞ 2^{-(i+v)} binom(i+v-1, i) ≤ q, respectively, where q ∈ (0, 1) is the prespecified level for the FDR or k-FWER.

It is seen that the only difference of the algorithm above from the perfect knockoffs procedure is how the knockoffs data matrix is generated. The perfect knockoffs procedure based on the true underlying feature distribution F(·) has been shown to control the FDR or k-FWER at the target level <cit.>.
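For concreteness, a minimal Python sketch of the selection step 3) above with the FDR threshold is given below; the function name and the synthetic statistics are ours, the sketch implements only the displayed threshold T (the k-FWER threshold T_v would be computed analogously), and it is meant as an illustration rather than a definitive implementation.

```python
import numpy as np

def knockoff_select_fdr(W, q):
    """Compute T = min{t in {|W_j|}: #{j: W_j <= -t} / (#{j: W_j >= t} v 1) <= q}
    and return (T, S_hat) with S_hat = {j: W_j >= T}."""
    W = np.asarray(W, dtype=float)
    for t in np.sort(np.unique(np.abs(W))):          # candidate thresholds
        if np.sum(W <= -t) / max(np.sum(W >= t), 1) <= q:
            return t, np.flatnonzero(W >= t)
    return np.inf, np.array([], dtype=int)           # no feasible threshold

# toy usage with synthetic approximate knockoff statistics
rng = np.random.default_rng(0)
W_hat = np.concatenate([rng.normal(3.0, 1.0, 20), rng.normal(0.0, 1.0, 180)])
T, S_hat = knockoff_select_fdr(W_hat, q=0.2)
```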
For the approximate knockoffs inference, however, it is reasonable to expect some inflation in the FDR and k-FWER control, and the inflation level depends on the quality of the approximate knockoff variable matrix X̂ and the resulting knockoff statistics {Ŵ_j}_{j=1}^p. A desired property is that as the approximate knockoff statistics “approach" the perfect knockoff statistics, the level of inflation also vanishes. One contribution of our paper is to formally introduce a notion of closeness measuring the quality of the approximate knockoff statistics {Ŵ_j}_{j=1}^p and the approximate knockoff variable matrix X̂.

As stated in the Introduction, our technical analyses have three layers, corresponding, in reverse order, to the different steps of the approximate knockoffs inference procedure described above. To put it into more context, note that the set of selected features Ŝ is defined directly as a function of the approximate knockoff statistics {Ŵ_j}_{j=1}^p. Hence, given {Ŵ_j}_{j=1}^p, feature selection can be conducted without the knowledge of the approximate knockoff variable matrix X̂ or the feature distribution F(·). For this reason, the first layer of our analysis concerns the quality of {Ŵ_j}_{j=1}^p and characterizes what kind of approximate knockoff statistics can yield the asymptotic FDR or k-FWER control. The second layer of our analysis studies the quality of X̂ and is built on the first layer. We characterize what kind of approximate knockoff variable matrix X̂ can lead to valid knockoff statistics {Ŵ_j}_{j=1}^p in the sense of achieving the asymptotic FDR or k-FWER control. The third layer of our analysis goes all the way to the root of the knockoffs inference and provides specific examples and conditions on F̂(·) for ensuring the asymptotic FDR or k-FWER control. The key idea empowering our analyses is the variable coupling behind the approximate knockoffs (ARK) procedure, which we formally introduce in the next section.

§.§ Robust knockoffs inference with coupling
An important observation is that the perfect knockoff variables in the model-X knockoffs framework <cit.> are not unique. Consequently, the knockoff statistics are not unique either. Indeed, even with the same algorithm (e.g., Algorithm 1 in <cit.>), the knockoff variables generated from different runs of the algorithm are only identically distributed. Our coupling idea is deeply rooted in this observation.

Let us introduce some additional notation to facilitate our formal presentation of the general theory. Following the model-X knockoffs framework, for a realization X̃ of the perfect knockoff variable matrix generated from the true feature distribution F(·), we let (Z_1^*, …, Z_p^*, Z̃_1, …, Z̃_p)^⊤ = t((X, X̃), y) and define the perfect knockoff statistics W_j = f_j(Z_j^*, Z̃_j) for 1 ≤ j ≤ p, where the functions t(·) and f_j(·) are chosen to be the same as in the approximate knockoffs inference procedure. We next establish a general theory on the asymptotic FDR control and k-FWER control for the approximate knockoffs inference procedure with regularity conditions imposed on the Ŵ_j's.

[Coupling accuracy] There exist perfect knockoff statistics {W_j}_{j=1}^p such that for some sequence b_n → 0, ℙ(max_{1 ≤ j ≤ p} |Ŵ_j - W_j| ≥ b_n) → 0.

Conditions on the convergence rate b_n for ensuring the asymptotic FDR or k-FWER control will be specified in the subsequent assumptions.
Condition <ref> above couples each realization of the approximate knockoff statistics {Ŵ_j}_{j=1}^p with a realization of the perfect knockoff statistics {W_j}_{j=1}^p, and they need to be sufficiently close to each other with high probability. Note that the existence of such {W_j}_{j=1}^p is required only for the theory, whereas the implementation uses only {Ŵ_j}_{j=1}^p. We will provide examples in later sections verifying the existence of such coupled {W_j}_{j=1}^p. The two conditions below are on the quality of the ideal knockoff statistics {W_j}_{j=1}^p and the signal strength in the data as measured by the W_j's.

[Average concentration of W_j] There exist deterministic quantities {w_j}_{j=1}^p such that p^{-1} ∑_{j=1}^p ℙ(|W_j - w_j| ≥ δ_n) = o(p^{-1}), where δ_n → 0 is a sequence satisfying δ_n ≥ b_n.

[Signal strength] Let 𝒜_n = {j ∈ ℋ_1: w_j ≥ 5δ_n}. It holds that a_n = |𝒜_n| → ∞ and w_j > -δ_n for j ∈ 𝒜_n^c.

As discussed in <cit.> and <cit.>, a desired property of the knockoff statistics is to have a large and positive value of W_j if j ∈ ℋ_1, and a small value of W_j that is symmetric around zero if j ∈ ℋ_0. Conditions <ref> and <ref> above together formalize this property in the average probability sense. Note that there is no requirement that each individual w_j with j ∈ ℋ_1 be positive and large; we only need that there exists a sufficient number (i.e., a_n) of w_j's with j ∈ ℋ_1 that are positive and large enough. Implicitly, a_n → ∞ requires that the number of relevant features |ℋ_1| diverges with the sample size as well. Condition <ref> requires that each perfect knockoff statistic W_j is concentrated around its corresponding w_j with rate δ_n in an average probability sense.

Let us define p_0 = |ℋ_0| and G(t) = p_0^{-1} ∑_{j ∈ ℋ_0} ℙ(W_j ≥ t). By <cit.>, the perfect knockoff statistics W_j with j ∈ ℋ_0 are symmetrically distributed around zero. It follows that G(t) = p_0^{-1} ∑_{j ∈ ℋ_0} ℙ(W_j ≤ -t). We need to impose the technical conditions below on the distribution of the perfect knockoff statistics for our robustness analysis.

[Weak dependence] For some constants 0 < γ < 1, 0 < c_1 < 1, C_1 > 0, and a positive sequence m_n = o(a_n), it holds that Var(∑_{j ∈ ℋ_0} 1(W_j > t)) ≤ C_1 m_n p_0 G(t) + o((log p)^{-1/γ} [p_0 G(t)]^2) uniformly over t ∈ (0, G^{-1}(c_1 q a_n / p)].

[Distribution of W_j] Assume that G(t) is a continuous function. For the same constants γ and c_1 as in Condition <ref>, it holds that as n → ∞, (log p)^{1/γ} sup_{t ∈ (0, G^{-1}(c_1 q a_n / p)]} [G(t - b_n) - G(t + b_n)] / G(t) → 0 and a_n^{-1} ∑_{j ∈ ℋ_1} ℙ(W_j < -G^{-1}(c_1 q a_n / p) + b_n) → 0.

Condition <ref> above can be easily satisfied if the W_j's with j ∈ ℋ_0 are independent of each other. In the presence of dependence, it imposes an assumption on the strength of correlation among the indicator functions 1(W_j > t) with j ∈ ℋ_0. The ratio [G(t - b_n) - G(t + b_n)] / G(t) in Condition <ref> above is closely related to the hazard rate function in survival analysis if G(t) has a probability density function. Loosely speaking, assumption (<ref>) requires that the hazard rate function be sufficiently smooth and more or less bounded uniformly over the range t ∈ (0, G^{-1}(c_1 q a_n / p)]; it plays an essential role in determining the coupling accuracy b_n. Assumption (<ref>) allows a small fraction of W_j's for important features to take negative values with nonvanishing probabilities.

We are now ready to present our first general theorem on the FDR control for the approximate knockoffs inference procedure.
Under Conditions <ref>–<ref>, we have lim sup_{n → ∞} FDR ≤ q.

The main idea of the proof is to show that ∑_{j ∈ ℋ_0} 1(Ŵ_j ≥ t) ≈ ∑_{j ∈ ℋ_0} 1(W_j ≥ t) ≈ ∑_{j ∈ ℋ_0} ℙ(W_j ≥ t) and similarly ∑_{j ∈ ℋ_0} 1(Ŵ_j ≤ -t) ≈ ∑_{j ∈ ℋ_0} ℙ(W_j ≤ -t) uniformly over 0 < t ≤ G^{-1}(c_1 q a_n / p) with asymptotic probability one, under Conditions <ref>, <ref>, and <ref>. In addition, we will show that the threshold T falls into the range (0, G^{-1}(c_1 q a_n / p)] with asymptotic probability one under Conditions <ref>–<ref>. Thus, ∑_{j ∈ ℋ_0} 1(Ŵ_j ≥ T) ≈ ∑_{j ∈ ℋ_0} 1(Ŵ_j ≤ -T) with asymptotic probability one by the symmetry of {W_j}_{j ∈ ℋ_0}. Consequently, the FDR of the approximate knockoffs procedure is asymptotically the same as that of the perfect knockoffs procedure, where the latter has been proved to be controlled at the target level. This ensures that the FDR of the approximate knockoffs procedure can be controlled asymptotically.

Next we will establish the companion result on the k-FWER control for the approximate knockoffs inference procedure. Recall that the selected subset is Ŝ_2 := {1 ≤ j ≤ p: Ŵ_j ≥ T_v} with the threshold T_v given in (<ref>). Denote by V̂ = |Ŝ_2 ∩ ℋ_0| the number of false discoveries. Similar to the FDR analysis, we assume that |ℋ_1| → ∞ as n → ∞. Further, we consider the scenario when k diverges very slowly with n as well. Our theorem will again be built on the key Condition <ref> that there exist coupled perfect knockoff statistics that are sufficiently close to the approximate knockoff statistics. However, different from the FDR study where Conditions <ref>–<ref> are needed, we assume instead the two technical conditions below for the distribution of the perfect knockoff statistics, which can be interpreted intuitively in a similar way to Conditions <ref>–<ref>.

[Weak dependence] For constants 0 < γ < 1 and C_2 > 0, and a positive sequence m_n = o(k), it holds that Var(∑_{j ∈ ℋ_0} 1(W_j > t)) ≤ C_2 m_n p_0 G(t) + o((log k)^{-1/γ} [p_0 G(t)]^2) uniformly over t ∈ (G^{-1}(3k/(2p)), G^{-1}(k/(2p))).

Assume that G(t) is a continuous function. It holds that as n → ∞, sup_{t ∈ (G^{-1}(3k/(2p)), G^{-1}(k/(2p)))} [G(t - b_n) - G(t + b_n)] / G(t) → 0 and k^{-1} ∑_{j ∈ ℋ_1} ℙ(W_j < -G^{-1}(3k/(2p))) → 0.

Now we are ready to present our general theorem on the k-FWER control for the approximate knockoffs procedure. Assume that Conditions <ref>, <ref>, and <ref> are satisfied, k → ∞, and m_n / k → 0 as n → ∞. Then for each ε > 0, we have lim sup_{n → ∞} ℙ(V̂ ≥ k(1 + ε)) ≤ q.

<cit.> showed that the perfect knockoffs inference procedure for the k-FWER control provides precise finite-sample control of the k-FWER. The main idea for proving Theorem <ref> is to compare the approximate knockoff statistics with their coupled perfect counterparts and show that the approximate threshold T_v and the corresponding threshold T̃_v from the perfect knockoff statistics satisfy |T_v - T̃_v| ≤ b_n as long as max_{1 ≤ j ≤ p} |Ŵ_j - W_j| ≤ b_n. Moreover, we can show that for each ε > 0, with high probability it holds that T̃_{v + M_v + 1} < T_v - 2b_n ≤ T̃_{v + M_v} for some integer M_v ≤ εv. Therefore, the probability of the approximate knockoffs inference procedure making at least k false discoveries can be related to that of the FWER control with the perfect knockoff statistics, which establishes the desired result in Theorem <ref>.

§ ILLUSTRATION OF THE GENERAL THEORY
§.§ Characterization of approximate knockoff variables
We have established in Section <ref> a general theory on the asymptotic control of the FDR and k-FWER for the approximate knockoffs inference.
The key assumption for ensuring the asymptotic FDR and k-FWER control is Condition <ref>. Since the knockoff statistics are intermediate results calculated from the knockoff variables, it is important to provide a characterization of the quality of the approximate knockoff variable matrix that can guarantee Condition <ref>. The assumption below is imposed for such a purpose.

For the approximate knockoff data matrix X̂ constructed from the approximate knockoffs inference procedure, there exists a perfect knockoff data matrix X̃ and an asymptotically vanishing sequence Δ_n such that ℙ(max_{1 ≤ j ≤ p} n^{-1/2} ‖X̂_j - X̃_j‖_2 ≥ Δ_n) → 0, where X̂_j and X̃_j are the jth columns of the approximate and perfect knockoff data matrices X̂ and X̃, respectively.

Condition <ref> above couples each approximate knockoff variable X̂_j with a perfect knockoff variable X̃_j. Similar to Condition <ref>, we need the realizations instead of the distributions of X̂_j and X̃_j to be close, which is a major distinction from the assumption in <cit.>. We next show that the closeness between X̂ and X̃ can lead to the closeness between the Ŵ_j's and W_j's as required by Condition <ref>. Since different constructions of the knockoff statistics depend on the feature matrix differently, we showcase the theory using two popularly used constructions of the knockoff statistics: the marginal correlation knockoff statistics and the regression coefficient difference (RCD) knockoff statistics.

§.§ Marginal correlation knockoff statistics
The marginal correlation is a commonly used measure of variable importance for feature screening due to its simplicity. Given an approximate knockoff variable matrix X̂ and its coupled perfect counterpart X̃ satisfying Condition <ref>, the approximate knockoff statistics based on the marginal correlation difference are defined as Ŵ_j = (√n ‖y‖_2)^{-1} (|X_j^⊤ y| - |X̂_j^⊤ y|) for 1 ≤ j ≤ p, and the coupled perfect knockoff statistics are given by W_j = (√n ‖y‖_2)^{-1} (|X_j^⊤ y| - |X̃_j^⊤ y|) for 1 ≤ j ≤ p. Observe that W_j - Ŵ_j = (√n ‖y‖_2)^{-1} (|X̂_j^⊤ y| - |X̃_j^⊤ y|) and thus under Condition <ref>, we have that with asymptotic probability one, max_{1 ≤ j ≤ p} |Ŵ_j - W_j| ≤ Δ_n. This result is summarized formally in Lemma <ref> in Section <ref> of the Supplementary Material.

We consider the flexible nonparametric regression model Y = f(X_{ℋ_1}) + ε, where f is some unknown regression function, X_{ℋ_1} = (X_j)_{j ∈ ℋ_1} contains all the relevant features for the response Y, and ε is the model error satisfying ε ⊥ X and E(ε) = 0. Assume that the feature vector X = (X_1, …, X_p)^⊤ d∼ N(0, Σ) with Σ the positive definite covariance matrix. Moreover, let the distribution of the perfect knockoff variables X̃ = (X̃_1, …, X̃_p)^⊤ satisfy that (X, X̃) = (X_1, …, X_p, X̃_1, …, X̃_p) d∼ N(0, [Σ, Σ - r I_p; Σ - r I_p, Σ]), where r > 0 is a constant such that the above covariance matrix is positive definite. Here, we consider the equicorrelated construction for simpler presentation, and the diagonal matrix r I_p can be replaced with a general version diag(r_1, …, r_p) with possibly distinct diagonal entries {r_j}_{j=1}^p. The above construction of the perfect knockoff variables has been discussed in <cit.>. Note that the Gaussian distribution assumption is imposed mainly to verify the general Conditions <ref> and <ref>. If one assumes directly these two conditions, the Gaussian distribution assumption can be removed. Furthermore, we make the additional technical assumptions below on the generative model (<ref>) to verify the conditions in our general theory presented in Section <ref>.
Y is a sub-Gaussian random variable with sub-Gaussian norm ‖Y‖_{ψ_2}. Define 𝒜_n = {j ∈ ℋ_1: (E Y^2)^{-1/2} (|E(X_j Y)| - |E(X̃_j Y)|) ≥ 5δ_n} with δ_n = C_{X,Y} √(log p / n), where C_{X,Y} := max_{1 ≤ j ≤ p} {16√2 ‖X_j‖_{ψ_2} ‖Y‖_{ψ_2} / (E Y^2)^{1/2} ∨ 8√2 |w_j| ‖Y‖_{ψ_2}^2 / E Y^2}. It holds that a_n := |𝒜_n| → ∞ and C_{X,Y} is a positive constant that is independent of p and n.

Denote by (Σ^{-1})_j the jth column of matrix Σ^{-1}, Σ_{i, j} the (i, j)th entry of matrix Σ, and Σ_{ℋ_1, j} a vector given by (Σ_{i, j})_{i ∈ ℋ_1}. Recall the definition G(t) = p_0^{-1} ∑_{j ∈ ℋ_0} ℙ(W_j ≥ t).

Matrices Σ^{-1} and Σ are sparse in the sense that max_{1 ≤ j ≤ p} ‖(Σ^{-1})_j‖_0 ≤ m_n and ∑_{j ∈ ℋ_0} 1(Σ_{ℋ_1, j} ≠ 0) ≤ m_n. In addition, C_1 < r < min_{1 ≤ j ≤ p} Σ_{j, j} ≤ max_{1 ≤ j ≤ p} Σ_{j, j} < C_2 for some constants C_1 > 0 and C_2 > 0.

It holds that |ℋ_1|^{-1} ∑_{j ∈ ℋ_1} ℙ(W_j < -t) ≤ G(t) for all t ∈ (0, C_3 √(n^{-1} log p)) with C_3 > 0 some large constant.

Under Conditions <ref>–<ref>, we can verify that Conditions <ref>–<ref> are satisfied. This together with Condition <ref> and our general theorem on the FDR control (cf. Theorem <ref>) leads to the theorem below.

Assume that Conditions <ref>–<ref> are satisfied. In addition, assume that for some constant 0 < γ < 1, (log p)^{1/γ} m_n / a_n → 0 and the coupling accuracy Δ_n in Condition <ref> satisfies √n Δ_n (log p)^{1/2 + 1/γ} → 0. Then for the approximate knockoffs inference based on the marginal correlation, we have lim sup_{n → ∞} FDR ≤ q.

Let us make a few remarks on the conditions and result presented in Theorem <ref> above. Condition <ref> verifies the signal strength assumption in Condition <ref> in the specific context of model (<ref>) and the marginal correlation knockoff statistics. We show in Lemma <ref> in Section <ref> of the Supplementary Material that Condition <ref> holds with δ_n = O(√(n^{-1} log p)). Since we assume a Gaussian feature distribution in this section, the dependence among the indicator functions as required by Condition <ref> is determined by the covariance matrix Σ. Hence, Condition <ref> is imposed to justify the validity of Condition <ref>. It is worth mentioning that the sparse dependence structure assumed in Condition <ref> can be replaced with a general assumption that the conditional distribution X_{ℋ_0} | X_{ℋ_1} has sparse pairwise dependency and the sequence {h_j(t; X_{ℋ_1}) := E(1(W_j ≥ t) | X_{ℋ_1})}_{j ∈ ℋ_0} has sparse pairwise correlation for each given t > 0. Condition <ref> is a technical assumption that is intuitive and requires that on average, the probability of a relevant feature having a negative-valued W_j is smaller than the corresponding probability for an irrelevant feature. Such a condition is compatible with our requirement that relevant features should have positive W_j's of larger magnitude.

The convergence rate assumption √n Δ_n (log p)^{1/2 + 1/γ} → 0 in Theorem <ref> indicates that Δ_n ≪ δ_n ∼ √(n^{-1} log p), where δ_n is the concentration rate of each individual W_j to w_j. In view of (<ref>), the requirement Δ_n ≪ δ_n indeed requires that the quality of the approximate knockoff statistics, as measured by Δ_n, be of an order smaller than the concentration rate δ_n; this is a general condition we need and is not unique to the marginal correlation knockoff statistics. It is worth mentioning that the bound obtained in (<ref>) may be improved under more specific model assumptions. For instance, if (X̂_j, X̃_j) ⊥ Y for j ∈ ℋ_0 and Y is a sub-Gaussian random variable with E(Y) = 0, then under Condition <ref> we can show that max_{1 ≤ j ≤ p} |Ŵ_j - W_j| ≤ C Δ_n √(n^{-1} log p).
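To make the marginal correlation statistics above concrete, here is a minimal Python sketch (function and argument names are ours); in a simulation where both an approximate and a coupled perfect knockoff matrix are available, the commented line shows how the coupling accuracy max_j |Ŵ_j - W_j| in Condition <ref> would be evaluated.

```python
import numpy as np

def marginal_corr_stats(X, X_ko, y):
    """W_j = (sqrt(n) * ||y||_2)^{-1} * (|X_j^T y| - |X_ko_j^T y|) for each j,
    with X_ko either the approximate or the coupled perfect knockoff matrix."""
    n = X.shape[0]
    return (np.abs(X.T @ y) - np.abs(X_ko.T @ y)) / (np.sqrt(n) * np.linalg.norm(y))

# coupling accuracy in a simulation with both knockoff matrices at hand:
# np.max(np.abs(marginal_corr_stats(X, X_hat, y) - marginal_corr_stats(X, X_tilde, y)))
```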
Later in Section <ref>, we will provide extensive analysis of the coupling order Δ_n using some specific examples of feature distributions. Next we present the parallel result on the k-FWER control.

Assume that Conditions <ref>, <ref>, and <ref> are satisfied, k → ∞, m_n / k → 0, and Δ_n √(n log p) → 0. Then for each ε > 0, we have lim sup_{n → ∞} ℙ(V̂ ≥ k(1 + ε)) ≤ q.

The interpretations of the assumptions in the context of the k-FWER control are similar and thus omitted here for simplicity.

§.§ Regression coefficient difference with debiased Lasso
Another popularly used construction of the knockoff statistics is the regression coefficient difference (RCD). Let us consider the linear regression model y = Xβ + ε, where β = (β_j)_{1 ≤ j ≤ p} ∈ ℝ^p is the true regression coefficient vector, ε d∼ N(0, σ^2 I_n) is the model error vector, and ε ⊥ X. Assume that the feature vector X = (X_1, …, X_p)^⊤ has mean 0_p ∈ ℝ^p and covariance matrix Σ ∈ ℝ^{p × p}. Denote by β^{aug} = (β^⊤, 0_p^⊤)^⊤ ∈ ℝ^{2p} the augmented true parameter vector. Let β̂ = (β̂_j)_{1 ≤ j ≤ 2p} ∈ ℝ^{2p} be the debiased Lasso estimator (<cit.>) based on the augmented design matrix X̂^{aug} := [X, X̂], where X̂ is the approximate knockoff variable matrix. Assume that Condition <ref> is satisfied and X̃ is the coupled perfect knockoff variable matrix. Similarly, define X̃^{aug} := [X, X̃]. Then β̂ can be coupled with the debiased Lasso estimator denoted as β̃ = (β̃_j)_{1 ≤ j ≤ 2p} ∈ ℝ^{2p} based on X̃^{aug}. Then the regression coefficient difference knockoff statistics can be defined as Ŵ_j = |β̂_j| - |β̂_{j+p}| and W_j = |β̃_j| - |β̃_{j+p}| for the approximate and perfect knockoffs procedures, respectively, for 1 ≤ j ≤ p.

We provide the explicit definition of the debiased Lasso estimator to assist the future presentation. For 1 ≤ j ≤ 2p, the debiased Lasso estimator is a one-step bias correction from some initial estimator β̂^{init} = (β̂_j^{init})_{1 ≤ j ≤ 2p} ∈ ℝ^{2p} and is defined as β̂_j = β̂_j^{init} + ẑ_j^⊤ (y - X̂^{aug} β̂^{init}) / (ẑ_j^⊤ X̂^{aug}_j), where ẑ_j is the score vector defined as ẑ_j = X̂^{aug}_j - X̂^{aug}_{-j} γ̂_j with γ̂_j := argmin_b {‖X̂^{aug}_j - X̂^{aug}_{-j} b‖_2^2 / (2n) + λ_j ‖b‖_1} and {λ_j}_{j=1}^{2p} the nonnegative regularization parameters. We construct the initial estimator as β̂^{init} := argmin_b {‖y - X̂^{aug} b‖_2^2 / (2n) + λ ‖b‖_1} with λ = C √(n^{-1} log(2p)) the regularization parameter and C > 0 some constant. Analogously, the coupled debiased Lasso estimator can be defined componentwise as β̃_j = β̃_j^{init} + z̃_j^⊤ (y - X̃^{aug} β̃^{init}) / (z̃_j^⊤ X̃^{aug}_j) for 1 ≤ j ≤ 2p, where β̃^{init} = (β̃_j^{init})_{1 ≤ j ≤ 2p} := argmin_b {‖y - X̃^{aug} b‖_2^2 / (2n) + λ ‖b‖_1} and z̃_j = X̃^{aug}_j - X̃^{aug}_{-j} γ̃_j with γ̃_j := argmin_b {‖X̃^{aug}_j - X̃^{aug}_{-j} b‖_2^2 / (2n) + λ_j ‖b‖_1}.

It is important to emphasize that the same regularization parameters λ and λ_j's should be used in defining β̃ as in defining β̂ in (<ref>) so that their constructions differ only by the feature matrix used; this plays a key role in applying our coupling technique. Indeed, we prove in Lemma <ref> in Section <ref> of the Supplementary Material that the coupling technique together with Condition <ref> and some other regularity conditions ensures that with asymptotic probability one, max_{1 ≤ j ≤ 2p} |β̃_j - β̂_j| ≲ Δ_n s √(log p / n). The above result guarantees that the Ŵ_j's and W_j's are also uniformly close over 1 ≤ j ≤ p with max_{1 ≤ j ≤ p} |Ŵ_j - W_j| ≲ Δ_n s √(log p / n). As long as s Δ_n → 0, this upper bound is of a smaller order than the concentration rate δ_n of W_j (cf. Condition <ref>), because here δ_n ∼ √(n^{-1} log p) as shown in our Lemma <ref> in Section <ref>. As commented after Theorem <ref>, the assumption that the coupling rate of max_{1 ≤ j ≤ p} |W_j - Ŵ_j| is of a smaller order than the concentration rate δ_n plays a key role in establishing our theory on the asymptotic FDR control.
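The construction above can be sketched in a few lines of Python; this is a simplified illustration rather than the authors' implementation: it uses scikit-learn's Lasso (whose objective ‖y - Ab‖_2^2/(2n) + α‖b‖_1 matches the scaling above) and a single nodewise penalty for all 2p columns in place of the individual λ_j's. Reusing the same penalties with the two augmented designs [X, X̂] and [X, X̃] is exactly the coupling device emphasized in the text.

```python
import numpy as np
from sklearn.linear_model import Lasso

def debiased_lasso(A, y, lam, lam_node):
    """One-step debiased Lasso over the design A: for each column j, regress it
    on the remaining columns to form the score z_j, then bias-correct the
    initial Lasso coefficient as b_j + z_j'(y - A b_init) / (z_j' A_j)."""
    n, d = A.shape
    b_init = Lasso(alpha=lam, fit_intercept=False).fit(A, y).coef_
    resid = y - A @ b_init
    b_deb = np.empty(d)
    for j in range(d):                                  # 2p nodewise Lasso fits
        others = np.delete(np.arange(d), j)
        gamma = Lasso(alpha=lam_node, fit_intercept=False).fit(A[:, others], A[:, j]).coef_
        z = A[:, j] - A[:, others] @ gamma              # score vector
        b_deb[j] = b_init[j] + z @ resid / (z @ A[:, j])
    return b_deb

def rcd_stats(X, X_ko, y, lam, lam_node):
    """RCD knockoff statistics W_j = |b_j| - |b_{j+p}| from the debiased Lasso
    on the augmented design [X, X_ko]."""
    p = X.shape[1]
    b = debiased_lasso(np.hstack([X, X_ko]), y, lam, lam_node)
    return np.abs(b[:p]) - np.abs(b[p:])
```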
We next introduce some additional notation and formally present the regularity conditions specific to this section. Observe that by symmetry, the augmented feature vector with the perfect knockoff variables has covariance matrix Σ^A = [Σ, Σ - D; Σ - D, Σ], where D is a diagonal matrix such that the matrix Σ^A is positive definite. Let Ω^A = (Σ^A)^{-1} and γ_j = (γ_{j, l})_{l ≠ j} with γ_{j, l} = -Ω^A_{j, l} / Ω^A_{j, j}. It has been shown in <cit.> that the residuals e_j = X^A_j - (X^A_{-j})^⊤ γ_j, with X^A the augmented feature vector, satisfy that Cov(e_j, X^A_{-j}) = 0, Var(e_j) = 1 / Ω^A_{j, j}, and Cov(e_j, e_l) = Ω^A_{j, l} / (Ω^A_{j, j} Ω^A_{l, l}). For 1 ≤ j ≤ 2p, denote by 𝒮_j = supp(γ_j) ∪ supp(γ̂_j) ∪ supp(γ̃_j). Let J = supp(β^{aug}) ∪ supp(β̂^{init}) ∪ supp(β̃^{init}) and s := ‖β^{aug}‖_0 = ‖β‖_0 = o(n). We make the technical assumptions below.

a) For some constant C_4 > 0, ℙ(|J| ≤ C_4 s) → 1. b) For some sequence m_n ≲ s, it holds that max_{1 ≤ j ≤ 2p} ‖Ω^A_j‖_0 ≤ m_n and ℙ(max_{1 ≤ j ≤ 2p} |𝒮_j| ≤ C_5 m_n) → 1 with some constant C_5 > 0. c) max_{1 ≤ j ≤ 2p} ‖γ_j‖_2 ≤ C_6 and C_7 < λ_min(Σ^A) ≤ λ_max(Σ^A) < C_8 with some positive constants C_6, C_7, and C_8.

[Restricted eigenvalues] Assume that with probability 1 - o(1), min_{‖δ‖_0 ≤ C_9 s} δ^⊤ (X̂^{aug})^⊤ X̂^{aug} δ / (n ‖δ‖_2^2) ≥ c_1 for some large enough constant C_9 > 0 and a small constant c_1 > 0.

The features X_j's and the errors e_j's are sub-Gaussian with sub-Gaussian norms ‖X_j‖_{ψ_2} ≤ ϕ and ‖e_j‖_{ψ_2} ≤ ϕ for some constant ϕ > 0.

Let 𝒜_n = {j ∈ ℋ_1: |β_j| ≫ √(n^{-1} log p)} and it holds that a_n := |𝒜_n| → ∞.

The features X_j's and the errors e_j's are sub-Gaussian satisfying ‖X_j‖_{ψ_2} ≤ ϕ and ‖e_j‖_{ψ_2} ≤ ϕ for some constant ϕ > 0, and with probability 1 - O(p^{-c}), ‖γ̃_j - γ_j‖_1 ≤ C n^{-1/2} a_{n,1}, ‖[X, X̃]_{-j} (γ̃_j - γ_j)‖_2^2 ≤ C a_{n,2}, n^{-1} ‖[X, X̃]^⊤ e_j‖_max ≤ C, and ‖β̃^{init} - β^{aug}‖_1 ≤ C s √(log p / n), where a_{n,1} and a_{n,2} are two possibly diverging sequences. Moreover, |σ^{jk}| / √(σ^{jj} σ^{kk}) < c for some constant 0 < c < 1.

We are now ready to state our results on the FDR control for the approximate knockoffs inference based on the debiased Lasso coefficients.

Assume that Conditions <ref> and <ref>–<ref> hold, m_n / a_n → 0, and m_n^{1/2} s (log p)^{3/2 + 1/γ} / √n + Δ_n s (log p)^{1 + 1/γ} → 0 for some constant 0 < γ < 1. Then we have lim sup_{n → ∞} FDR ≤ q.

Similarly as discussed in the last section, Condition <ref> is used to verify the weak dependence assumption in Condition <ref>. Conditions <ref> and <ref> are two regularity assumptions imposed for proving (<ref>). Condition <ref> contributes to verifying the general signal strength requirement in Condition <ref>. We have the parallel theorem for the k-FWER control below.

Assume that Conditions <ref> and <ref>–<ref> are satisfied, k → ∞, m_n / k → 0, and m_n^{1/2} s (log p)^{3/2} (log k)^{1/γ} / √n + Δ_n s log p → 0 for some constant 0 < γ < 1. Then for each ε > 0, we have lim sup_{n → ∞} ℙ(V̂ ≥ k(1 + ε)) ≤ q.

Assume Conditions <ref> and <ref>–<ref> are satisfied. When (log p)^{1/γ + 1/2} (n^{1/2} b_n + n^{-1/2} s log p) → 0, we have lim sup_{(n, p) → ∞} FDR ≤ q.

We start with the verification of Condition <ref>. Let J = supp(β^0) ∪ supp(β̂^{init}) ∪ supp(β̃^{init}). Assume Condition <ref> is satisfied. Moreover, with asymptotic probability one, it holds that |J| ≤ m = o(n) and the restricted eigenvalue of n^{-1} times the Gram matrix of the augmented design restricted to J' is lower bounded by κ_c for any J' with |J'| ≤ m. Then we have ℙ(max_{1 ≤ j ≤ p} |Ŵ_j - W_j| ≲ κ_c^{-1} m^{3/2} Δ_n (n^{-1} log p)^{1/2} + m^{1/2} Δ_n n^{-1/2}) ≥ 1 - o(1). This result can be improved by applying an alternative version of the condition that takes advantage of the specific structure of the distance between the approximate and perfect knockoff variables.

We continue with the verification of Condition <ref>.
Let w_j = |β^0_j| for 1 ≤ j ≤ p. Then we can obtain the following result. Assume that ‖β̃^{init} - β^0‖_1 = o_p(1). Then we have ∑_{j=1}^p ℙ(|W_j - w_j| > C √(log p / n)) → 0.

Next we turn to the verification of Condition <ref>. Suppose that the features X_j's and the errors e_j's are sub-Gaussian satisfying ‖X_j‖_{ψ_2} ≤ ϕ and ‖e_j‖_{ψ_2} ≤ ϕ for some constant ϕ > 0 and that with probability 1 - O(p^{-c}), ‖γ̃_j - γ_j‖_1 ≤ C n^{-1/2} a_{n,1}, ‖[X, X̃]_{-j} (γ̃_j - γ_j)‖_2^2 ≤ C a_{n,2}, n^{-1} ‖[X, X̃]^⊤ e_j‖_max ≤ C, and ‖β̃^{init} - β^{aug}‖_1 ≤ C s √(log p / n), where a_{n,1} and a_{n,2} are two possibly diverging sequences. Moreover, |σ^{jk}| / √(σ^{jj} σ^{kk}) < c for some constant 0 < c < 1. If (log p)^{1/γ + 1/2} [n^{1/2} b_n + n^{-1/2} s log p] → 0, then (<ref>) in Condition <ref> is satisfied.

Here we assume that both the features X_j and the errors e_j are sub-Gaussian for simpler presentation. In fact, if we assume ‖X_j‖_{ψ_2} ≤ ϕ and sparsity of the correlations between features, that is, ‖γ_j‖_0 ≤ s = o(n), and in addition suppose that the coefficients are bounded with ‖γ_j‖_max ≤ c, then the error e_j is also sub-Gaussian with ‖e_j‖_{ψ_2} ≤ c s ϕ. (This result follows from the fact that if X_1 and X_2 are random variables such that X_i is b_i-sub-Gaussian, then X_1 + X_2 is (b_1 + b_2)-sub-Gaussian.)

§ KNOCKOFF VARIABLE COUPLING
In this section, we present three specific constructions for the coupled perfect knockoff variables and verify that they satisfy Condition <ref> with the desired convergence rate.

§.§ Knockoffs for multivariate t-distribution
In this example, we will construct knockoffs for multivariate t-distributed features by leveraging only information on the first two moments; the knowledge of the t-distribution will not be utilized in the approximate knockoffs construction. Assume that the underlying true feature distribution of X = (X_1, …, X_p)^⊤ is the multivariate centered t-distribution t_ν(0, Θ^{-1}) with unknown parameters ν and Θ^{-1}. We construct the approximate knockoff variables from the Gaussian distribution with the attempt to match the first two moments of the feature vector X. It is seen that the working distribution F̂ is misspecified. It has been a common practice to use the multivariate Gaussian distribution to construct knockoff variables in practice; see, e.g., <cit.>.

Assume that there is an effective estimator Ω̂ constructed using the data matrix X for the precision matrix Ω := [Cov(X)]^{-1} = ((ν - 2)/ν) Θ. We construct the approximate knockoffs data matrix from the Gaussian distribution as X̂ = X(I_p - r Ω̂) + Z(2r I_p - r^2 Ω̂)^{1/2}, where r is a constant such that 2r I_p - r^2 Ω̂ is positive definite, and Z ∈ ℝ^{n × p} is independent of (X, y) and consists of i.i.d. standard normal entries.

Before suggesting our coupled perfect knockoff variables, it is necessary to review some properties of the multivariate t-distribution. Note that an alternative representation of X is given by X = η / √(Q/ν), where ν > 0 is the degrees of freedom, η d∼ N(0, Θ^{-1}), Q d∼ χ_ν^2, and η ⊥ Q. Here, χ_ν^2 is the chi-square distribution with ν degrees of freedom. When ν is large, the distribution of X is close to the Gaussian distribution N(0, Θ^{-1}).

We are ready to introduce our construction of the coupled perfect knockoff variable matrix X̃ = X(I_p - r Θ) + diag(1/√(Q/ν)) Z (2r I_p - r^2 Θ)^{1/2}, where diag(1/√(Q/ν)) = diag(1/√(Q_1/ν), 1/√(Q_2/ν), …, 1/√(Q_n/ν)) with {Q_i}_{i=1}^n i.i.d. random variables sampled from the conditional distribution Q | X. Let η = (η_1, …, η_n) be sampled from the conditional distribution η | X, and take r and Z to be the identical realizations to those used in (<ref>).
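A minimal simulation sketch of this coupling is given below for the simplified identity-scale case Θ = I_p with r = 1 and the working precision taken to be I_p (the setting revisited later in this subsection); since the latent pair (η, Q) that generates X is available in a simulation, it plays the role of the draws from the conditional distributions η | X and Q | X, and all variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, nu = 500, 200, 50

# latent representation of the t-distributed features: X = eta / sqrt(Q / nu)
eta = rng.standard_normal((n, p))            # eta ~ N(0, I_p)
Q = rng.chisquare(nu, size=n)                # Q ~ chi^2_nu, independent of eta
X = eta / np.sqrt(Q / nu)[:, None]           # rows of X ~ t_nu(0, I_p)

Z = rng.standard_normal((n, p))              # the same Z is shared by both constructions
X_hat = Z                                    # approximate: X(I - r I) + Z(2r I - r^2 I)^{1/2} = Z
X_tilde = Z / np.sqrt(Q / nu)[:, None]       # coupled perfect knockoffs reuse Z and the latent Q

# coupling accuracy max_j n^{-1/2} ||X_hat_j - X_tilde_j||_2, expected to be of order nu^{-1/2}
delta_n = np.max(np.linalg.norm(X_hat - X_tilde, axis=0)) / np.sqrt(n)
print(delta_n, 1 / np.sqrt(nu))
```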
By construction, we can see that (X, X̃) = diag(1/√(Q/ν)) (η, η(I_p - r Θ) + Z(2r I_p - r^2 Θ)^{1/2}) d= diag(1/√(Q/ν)) (η, η̃), where (η, η̃) has i.i.d. rows that follow a common Gaussian distribution N(0, Σ^{co}) with Σ^{co} = [Θ^{-1}, Θ^{-1} - r I_p; Θ^{-1} - r I_p, Θ^{-1}]. Thus, this verifies that X̃ forms a perfect knockoff data matrix for X. The proposition below verifies that the coupling assumption in Condition <ref> holds.

Assume that C_l ≤ ‖Θ^{-1}‖_2 ≤ C_u and ‖(2r I_p - r^2 Θ)^{-1}‖_2 ≤ C_u for some constants C_u > 0 and C_l > 0. Assume further that Θ and Ω̂ are both sparse in the sense that max_{1 ≤ j ≤ p} (‖Θ_j‖_0 + ‖Ω̂_j‖_0) ≤ ρ_n almost surely with ρ_n √(log p / n) → 0 and ρ_n ν^{-1/2} → 0, and that there exists a constant C > 0 such that ℙ(‖Ω̂ - Ω‖_2 ≥ C ρ_n (n^{-1} log p)^{1/2}) → 0. Then when ν ≥ 9 and log p = o(n^{1 - 4/ν}), we have that for some constant C > 0, ℙ(max_{1 ≤ j ≤ p} n^{-1/2} ‖X̂_j - X̃_j‖_2 ≤ C(ρ_n (n^{-1} log p)^{1/2} + ν^{-1/2})) → 1.

The assumed convergence rate of ρ_n (n^{-1} log p)^{1/2} for precision matrix estimation in (<ref>) has been verified in many existing works (e.g., <cit.>, <cit.>, and <cit.>) under the sparsity assumption. Proposition <ref> above indicates that the knockoffs procedure can potentially achieve the asymptotic FDR control even when the working distribution is misspecified but has the first two moments matched.

We next compare our results to those in <cit.>. For simplicity, let us further assume that Θ = I_p and is known. Then X d∼ t_ν(0, I_p) and the constructed approximate knockoff variables satisfy X̂ d∼ N(0, I_p). We set r = 1 in (<ref>) and (<ref>) when constructing the approximate and perfect knockoff matrices; hence the augmented covariance matrix in (<ref>) is given by Σ^{co} = I_{2p}. In such a case, Proposition <ref> guarantees that ℙ(max_{1 ≤ j ≤ p} n^{-1/2} ‖X̂_j - X̃_j‖_2 ≤ C ν^{-1/2}) → 1. This implies that Condition <ref> is satisfied with Δ_n = C ν^{-1/2}. Observe that X_j = Z_j / √(𝒳_ν^2 / ν) with Z_j d∼ N(0, 1), and the denominator satisfies that for an absolute constant C > 0 and ν ≫ log(np), ℙ(|𝒳_ν^2 / ν - 1| ≥ C √(log(np)/ν)) = O((np)^{-C^2/8}). These indicate that the multivariate t-distribution is asymptotically close to the standard Gaussian distribution when ν ≫ log(np). Thus, under Conditions <ref>–<ref> and <ref> for the setting of the linear model, if we construct the knockoff statistics as the regression coefficient difference from the debiased Lasso, we can prove using similar technical analysis as for Theorem <ref> that lim sup_{n → ∞} FDR ≤ q when ν^{1/2} ≫ s (log p)^{1 + 1/γ} and s (log p)^{3/2 + 1/γ} / √n → 0 for some 0 < γ < 1.

<cit.> also derived an upper bound on the FDR inflation. Directly applying their result and calculating the KL divergence in their upper bound under the specific model setting stated above, we can obtain the lemma below. By applying Theorem 1 in <cit.>, it requires at least ν^2 ≫ n min(n, p) for lim sup_{n → ∞} FDR ≤ q. The intuition behind Lemma <ref> above is that Theorem 1 in <cit.> requires the empirical KL divergence max_{j ∈ ℋ_0} K̂L̂_j to converge to zero in probability, where K̂L̂_j = ∑_{i=1}^n [X_{i,j}^2/2 - ((ν + p)/2) log(1 + X_{i,j}^2/(ν + ‖X_{i,-j}‖_2^2)) - (X̂_{i,j}^2/2 - ((ν + p)/2) log(1 + X̂_{i,j}^2/(ν + ‖X̂_{i,-j}‖_2^2)))]. Here, X = (X_{i,j}) ∈ ℝ^{n × p} consists of i.i.d. rows sampled from t_ν(0, I_p), while X̂ = (X̂_{i,j}) ∈ ℝ^{n × p} consists of i.i.d. rows sampled from N(0, I_p). As shown in the proof of Lemma <ref> in Section <ref> of the Supplementary Material, K̂L̂_j is a sum of i.i.d. random variables with positive mean of order O(p/(ν(ν + p))).
Hence, K̂L̂_j is concentrated at O(np/(ν(ν + p))), and to ensure that K̂L̂_j converges to zero in probability, we need at least np/(ν(ν + p)) → 0, or equivalently, ν^2 ≫ n min(n, p). Such a condition is stronger than our requirement ν^{1/2} ≫ s (log p)^{1 + 1/γ} derived from the coupling technique when s = o(√n) and p ≥ n.

§.§ Gaussian knockoffs
We now study the commonly used example of Gaussian knockoffs with the correctly specified distribution family. Assume that the feature vector X = (X_1, …, X_p)^⊤ d∼ N(0, Ω^{-1}) with unknown precision matrix Ω, and we have an effective estimator Ω̂ for the precision matrix Ω. Then the approximate knockoff variable matrix can be constructed as X̂ = X(I_p - r Ω̂) + Z(2r I_p - r^2 Ω̂)^{1/2}, where r > 0 is some constant such that 2r I_p - r^2 Ω̂ is positive definite, and Z = (Z_{i,j}) ∈ ℝ^{n × p} is independent of (X, y) with independent entries Z_{i,j} d∼ N(0, 1). Note that the approximate knockoff variable matrix above uses the correctly specified distribution family for X (i.e., the Gaussian distribution). We couple the approximate knockoff variable matrix X̂ with the perfect knockoff variable matrix X̃ = X(I_p - r Ω) + Z(2r I_p - r^2 Ω)^{1/2}, where importantly, Z and r are exactly the same as those used in constructing X̂. We present the result below regarding the accuracy of the approximate knockoff variables.

Assume that C_l ≤ ‖Ω^{-1}‖_2 ≤ C_u and ‖(2r I_p - r^2 Ω)^{-1}‖_2 ≤ C_u for some constants C_u > 0 and C_l > 0. Assume further that the precision matrix Ω and its estimator Ω̂ are both sparse in the sense that max_{1 ≤ j ≤ p} (‖Ω_j‖_0 + ‖Ω̂_j‖_0) ≤ ρ_n almost surely with ρ_n √(log p / n) → 0, and that there exists a constant C > 0 such that ℙ(‖Ω̂ - Ω‖_2 ≥ C ρ_n √(log p / n)) → 0. Then we have that for some constant C > 0, ℙ(max_{1 ≤ j ≤ p} n^{-1/2} ‖X̂_j - X̃_j‖_2 ≤ C ρ_n √(log p / n)) → 1.

Proposition <ref> above implies that Condition <ref> is satisfied with coupling accuracy Δ_n = C ρ_n √(log p / n), where ρ_n represents the sparsity level of the precision matrix Ω and its estimator Ω̂. We again consider the linear model setting and construct the knockoff statistics as the regression coefficient difference from the debiased Lasso. Then it follows from Theorem <ref> that under Conditions <ref>–<ref> and <ref>, we have lim sup_{n → ∞} FDR ≤ q provided that s ρ_n (log p)^{3/2 + 1/γ} = o(√n) for some 0 < γ < 1. Our technical analyses do not require data splitting or an independent pretraining sample. The results in <cit.> require an independent unlabeled pretraining data set with sample size N to estimate the unknown precision matrix. Specific to the model setting considered in this section, their results indicate that lim sup_{n → ∞} FDR ≤ q when N ≫ n ρ_n (log p)^2. This again shows some potential advantage of our coupling technique in the robustness analyses.

§.§ Nonparanormal knockoffs
We further investigate a much more general distribution family, that is, the family of Gaussian copula distributions. Assume that X = (X_1, …, X_p)^⊤ has marginal distributions X_j d∼ F_j(·) and satisfies that (Φ^{-1}(F_1(X_1)), …, Φ^{-1}(F_p(X_p)))^⊤ d∼ N(0, Ω^{-1}), where the diagonal entries of Ω^{-1} are all one. Further assume that we have effective estimators F̂_j for F_j and Ω̂ for Ω. Define Û = (Û_{i,j}) ∈ ℝ^{n × p} with Û_{i,j} = Φ^{-1}(F̂_j(X_{i,j})) and U = (U_{i,j}) ∈ ℝ^{n × p} with U_{i,j} = Φ^{-1}(F_j(X_{i,j})). Let V̂ = (V̂_{i,j}) ∈ ℝ^{n × p} be given by V̂ = Û(I_p - r Ω̂) + Z(2r I_p - r^2 Ω̂)^{1/2}, where r > 0 is some constant such that 2r I_p - r^2 Ω̂ is positive definite, and Z = (Z_{i,j}) ∈ ℝ^{n × p} is independent of (X, y) with i.i.d. entries Z_{i,j} d∼ N(0, 1). We construct the approximate knockoff variable matrix as X̂ = (X̂_{i,j}) with X̂_{i,j} = F̂_j^{-1}(Φ(V̂_{i,j})).
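As an illustration, a minimal Python sketch of this approximate nonparanormal construction is given below, with clipped empirical distribution functions playing the role of the F̂_j's (the choice discussed after the next proposition) and with the latent precision estimate Ω̂ supplied by the user; the function and variable names are ours, and this is only a sketch of the construction described above, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def nonparanormal_knockoffs(X, Omega_hat, r, rng):
    """Approximate nonparanormal knockoffs: map each column through
    Phi^{-1}(F_hat_j), form Gaussian-type knockoffs in the latent space as
    V_hat = U_hat (I - r Omega_hat) + Z (2r I - r^2 Omega_hat)^{1/2}, and map
    back through F_hat_j^{-1}(Phi(.)) using empirical quantiles."""
    n, p = X.shape
    ranks = np.argsort(np.argsort(X, axis=0), axis=0) + 1
    F_hat = np.clip(ranks / n, 1 / (2 * n), 1 - 1 / (2 * n))   # clipped empirical CDFs
    U_hat = norm.ppf(F_hat)

    Z = rng.standard_normal((n, p))
    eigval, eigvec = np.linalg.eigh(2 * r * np.eye(p) - r**2 * Omega_hat)
    sqrt_mat = eigvec @ np.diag(np.sqrt(np.maximum(eigval, 0.0))) @ eigvec.T
    V_hat = U_hat @ (np.eye(p) - r * Omega_hat) + Z @ sqrt_mat

    X_sorted = np.sort(X, axis=0)                              # empirical quantile lookup
    idx = np.clip((norm.cdf(V_hat) * n).astype(int), 0, n - 1)
    return np.take_along_axis(X_sorted, idx, axis=0)

# usage sketch: X_hat = nonparanormal_knockoffs(X, Omega_hat, r=0.5, rng=np.random.default_rng(0))
```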
It is seen that this example also uses the correctly specified distribution family for X, i.e., the Gaussian copula. We suggest constructing the coupled perfect knockoff variable matrix as X̃ = (X̃_{i,j}) with X̃_{i,j} = F_j^{-1}(Φ(Ṽ_{i,j})), where Ṽ_{i,j} represents the (i, j)th entry of the matrix Ṽ = U(I_p - r Ω) + Z(2r I_p - r^2 Ω)^{1/2} with Z and r identical in values to the ones used in (<ref>). The proposition below characterizes the coupling rate between X̂ and X̃.

Assume that (<ref>) is satisfied and both Ω and Ω̂ are sparse in the sense that max_{1 ≤ j ≤ p} (‖Ω_j‖_0 + ‖Ω̂_j‖_0) ≤ ρ_n with p ρ_n = o(n/(log n)^3) almost surely. Assume further that for 1 ≤ j ≤ p, the distribution estimators satisfy 1/(2n) ≤ F̂_j(x) ≤ 1 - 1/(2n) for each x ∈ supp(X_j), supp(X_j) ⊂ [-b, b] for some constant b > 0, and there exists a constant M > 0 such that ℙ(max_{1 ≤ j ≤ p} sup_{x ∈ [2M n^{-1} log n, 1 - 2M n^{-1} log n]} |F̂_j^{-1}(x) - F_j^{-1}(x)| ≥ (M n^{-1} log n)^{1/2}) → 0, ℙ(max_{1 ≤ j ≤ p} sup_{x ∈ (F_j^{-1}(2M n^{-1} log n), F_j^{-1}(1 - 2M n^{-1} log n))} |F̂_j(x) - F_j(x)| / {F_j(x)[1 - F_j(x)]} ≥ (M n^{-1} log n)^{1/2}) → 0, and ℙ(max_{1 ≤ j ≤ p} sup_{x, y ∈ (0, 1)} |F̂_j^{-1}(x) - F̂_j^{-1}(y)| / {|x - y| + (n^{-1} (log n) |x - y|)^{1/2} + n^{-1} log n} ≥ M) → 0. Then we have ℙ(max_{1 ≤ j ≤ p} n^{-1/2} ‖X̂_j - X̃_j‖_2 ≤ C(ρ_n √(log p / n) + √(p ρ_n (log n)^3 / n))) → 1.

When the estimators {F̂_j}_{j=1}^p are the empirical distribution functions and p = o(n), it can be shown that (<ref>), (<ref>), and (<ref>) are satisfied when the density function f_{X_j} is uniformly bounded on the support. See, e.g., <cit.> for the estimation of nonparanormal distributions; we opt not to discuss it here due to the space constraint. We also remark that the bounded support assumption supp(X_j) ⊂ [-b, b] is imposed to simplify the technical proofs and may be removed by applying a truncation technique and letting b slowly diverge with n. Since such a technical relaxation is not the main focus of the current paper, we choose not to explore it here.

§ DISCUSSIONS
We have investigated in this paper the robustness of the model-X knockoffs framework introduced in <cit.> by characterizing the feature selection performance of the approximate knockoffs (ARK) procedure, a popularly implemented version of the model-X knockoffs framework in practice. The approximate knockoffs procedure differs from the model-X knockoffs procedure in that it uses the misspecified or estimated feature distribution to generate the knockoff variables without the use of sample splitting. We have proved formally that the approximate knockoffs procedure can achieve the asymptotic FDR and FWER control as the sample size diverges in the high-dimensional setting. A key idea empowering our technical analysis is coupling, where we pair statistics in the approximate knockoffs procedure with those in the model-X knockoffs procedure so that they are close in realizations with high probability. The knockoff variable coupling has been investigated under some specific distribution assumptions in the current work. An interesting future study is to investigate the coupling idea under a broader class of, or even general, feature distributions.

Supplementary Material to “ARK: Robust Knockoffs Inference with Coupling”
Yingying Fan, Lan Gao and Jinchi Lv

This Supplementary Material contains the proofs of Theorems <ref>–<ref>, Propositions <ref>–<ref>, and some key technical lemmas. All the notation is the same as defined in the main body of the paper.
§ PROOFS OF THEOREMS <REF>–<REF> AND PROPOSITIONS <REF>–<REF> §.§ Proof of Theorem <ref> It has been shown in <cit.> that the model-X knockoffs inference procedure achieves the exact FDR control when the perfect knockoff statistics are employed. Note that the approximate knockoff statistics {Ŵ_j } are expected to provide a reliable approximation to the perfect knockoff statistics {W_j }, as assumed in Condition <ref>. The main idea of the proof is to establish the FDR control for the approximate knockoffs inference procedure through a comparison of the approximate knockoff statistics and a certain realization of the perfect knockoff statistics. The two lemmas below provide a sketch of the proof and can be established under Conditions <ref>–<ref>. Assume that Conditions <ref>, <ref>, and <ref> are satisfied. When a_n →∞ and m_n / a_n → 0, we have that sup_t ∈(0, G^-1 ( c_1 q a_n / p ) ] | ∑_j ∈ℋ_01 (Ŵ_j ≥ t) /∑_j ∈ℋ_0ℙ ( W_j ≥ t ) - 1 | = o_p(1), sup_t ∈(0, G^-1 ( c_1 q a_n / p ) ] | ∑_j ∈ℋ_01 (Ŵ_j ≤ -t) /∑_j ∈ℋ_0ℙ ( W_j ≤ - t ) - 1 | = o_p(1). sup_t ∈(0, G^-1 ( c_1 q a_n / p ) ) | ∑_j ∈ℋ_01 (Ŵ_j ≥ t) /∑_j ∈ℋ_0ℙ ( W_j ≥ t ) - 1 | = o_p(1), sup_t ∈(0, G^-1 ( c_1 q a_n / p ) ) | ∑_j ∈ℋ_01 (Ŵ_j ≤ - t) /∑_j ∈ℋ_0ℙ ( W_j ≤ - t ) - 1 | = o_p(1) . Under Conditions <ref>–<ref>, we have that for some constant 0 < c_1 < 1, ℙ ( T ≤ G^-1 ( c_1 q a_n / p ) ) → 1. Under Condition <ref>, we have sup_t ∈ (0, M_n, p) ∑_j ∈ H_01 (t - b_n ≤W_j ≤ t + b_n) /∑_j ∈ H_0ℙ ( W_j ≥ t ) = o_p(1) We present the proofs of Lemmas <ref> and <ref> in Sections <ref> and <ref>, respectively. Now we are ready to prove Theorem <ref>. Let us define two events ℬ_1 = {T ≤ G^-1 ( c_1 q a_n/p ) } and ℬ_2, ϵ = {sup_t ∈ (0, G^-1 ( c_1 q a_n/p )](| ∑_j ∈ℋ_01 (Ŵ_j ≥ t )/∑_j ∈ℋ_0ℙ (W_j ≥ t ) - 1 | | ∑_j ∈ℋ_01 (Ŵ_j ≤ - t )/∑_j ∈ℋ_0ℙ (W_j ≤ - t ) - 1 | )≤ϵ} for ϵ > 0. Lemmas <ref> and <ref> above have shown that ℙ(ℬ_1^c ) → 0 and ℙ(ℬ_2, ϵ^c ) → 0 for each ϵ > 0. In addition, it holds naturally that 0 ≤≤ 1. Then it follows that ≤( ∑_j ∈ℋ_01 ( Ŵ_j ≥ T ) / 1 ∑_ j = 1^p 1 (Ŵ_j ≥ T) ·1 (ℬ_1)1 (ℬ_2, ϵ) ) + ℙ(ℬ_1^c ) + ℙ(ℬ_2, ϵ^c ) = ( ∑_j ∈ℋ_01 ( Ŵ_j ≥ T ) / 1 ∑_ j = 1^p 1 (Ŵ_j ≥ T) ·1 (ℬ_1)1 (ℬ_2, ϵ) ) + o(1) . In view of the definition of threshold T in (<ref>), we can deduce that ∑_j ∈ℋ_01 ( Ŵ_j ≥ T ) / 1 ∑_ j = 1^p 1 (Ŵ_j ≥ T) ·1 (ℬ_1)1 (ℬ_2, ϵ) = ∑_j ∈ℋ_01 ( Ŵ_j ≥ T ) /∑_j ∈ℋ_0 1 (Ŵ_j ≤ -T) ·∑_j ∈ℋ_0 1 (Ŵ_j ≤ - T) / 1 ∑_ j = 1^p 1 (Ŵ_j ≥ T) ·1 (ℬ_1)1(ℬ_2, ϵ) ≤ q ·∑_j ∈ℋ_01 ( Ŵ_j ≥ T ) /∑_j ∈ℋ_0 1 (Ŵ_j ≤ - T ) ·1 (ℬ_1)1(ℬ_2, ϵ). Furthermore, it is easy to see that on event ℬ_1 ∩ℬ_2, ϵ, we have ∑_j ∈ℋ_01 ( Ŵ_j ≥ T ) /∑_j ∈ℋ_0 1 (Ŵ_j ≤ - T ) ≤sup_t ∈ (0, G^-1 ( c_1 q a_n/p )]∑_j ∈ℋ_01 ( Ŵ_j ≥ t ) /∑_j ∈ℋ_0 1 (Ŵ_j ≤ - t ) ≤1 + ϵ/1 - ϵsup_t ∈ (0, G^-1 ( c_1 q a_n/p )]∑_j ∈ℋ_0ℙ ( W_j ≥ t ) /∑_j ∈ℋ_0ℙ ( W_j ≤ - t ) = 1 + ϵ/1 - ϵ, where the last equation above is obtained by the symmetry of the perfect knockoff statistics {W_j }_j ∈ℋ_0 that ℙ ( W_j ≥ t ) = ℙ ( W_j ≤ - t ). Therefore, we can obtain that for each ϵ > 0, ≤ q ·1 + ϵ/1 - ϵ + o(1), which yields the desired result (<ref>). This completes the proof of Theorem <ref>. §.§ Proof of Theorem <ref> We first define the corresponding threshold T_v for the perfect knockoff statistics {W_j}_j=1^p in the model-X knockoffs inference for the k-FWER control as T_v = sup{ t ∈𝒲: #{j: - W_j ≥ t} = v }, where v is defined as in (<ref>) and 𝒲 = { | W_1 |, …, | W_p | }. 
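For concreteness, the threshold T_v just defined can be computed from a vector of knockoff statistics as in the following minimal sketch, and the corresponding threshold for the approximate statistics is obtained by applying the same function to the vector of approximate knockoff statistics. The function name and the convention of returning positive infinity for an empty candidate set are our own illustrative choices, not part of the theoretical development.

import numpy as np

def knockoff_threshold_Tv(W, v):
    # T_v = sup{ t in {|W_1|, ..., |W_p|} : #{j : -W_j >= t} = v }.
    W = np.asarray(W)
    feasible = [t for t in np.abs(W) if np.sum(-W >= t) == v]
    # An empty candidate set is treated as +infinity (nothing is selected), a conservative convention.
    return max(feasible) if feasible else np.inf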
As sketched in Lemmas <ref>–<ref> below, the main idea of the proof is to show that the threshold T_v based on the approximate knockoff statistics and the threshold T_v based on the perfect knockoff statistics are sufficiently close under Condition <ref> such that for any > 0, the number of W_j's that lie between T_v and T_v is at most v with asymptotic probability one, where v satisfies v / k → 1 as k →∞. Specifically, let M_v be the integer such that T_v + M_v≥T_v - 2 b_n > T_v + M_v +1. Then we can establish a bound for M_v as shown in Lemma <ref> below. We first present the three lemmas below that provide an outline of the proof. The proofs of Lemmas <ref>–<ref> are provided in Sections <ref>–<ref>. Under Condition <ref>, we have that ℙ ( | T_v - T_v | ≥ b_n) → 0. Assume that k →∞. Then we have that v / k = 1 + O ( k^ - 1 /2 ). Under all the conditions of Theorem <ref>, we have that sup_t ∈ (0, G^-1 (v/2p) )| ∑_j ∈ℋ_01 ( W_j ≥ t ) /∑_j ∈ℋ_0 ℙ ( W_j ≥ t ) - 1 | = o_p (ϵ_n) Under all the conditions of Theorem <ref>, we have that for each > 0, ℙ ( M_v ≤ v ) → 1. We are now ready to prove Theorem <ref>. It follows straightforwardly from Lemma <ref> that ℙ ( V̂≥ k(1 + 2 ) ) = ℙ( ∑_j ∈ℋ_01 ( Ŵ_j ≥T_v ) ≥ k(1 + 2) ) ≤ℙ( ∑_j ∈ℋ_01 ( W_j ≥T_v - 2 b_n ) ≥ k(1 + 2) ) ≤ℙ( ∑_ j ∈ℋ_01 ( W_j ≥T_v + M_v ) ≥ k (1 + 2) ) = ℙ( ∑_ j ∈ℋ_01 ( - W_j ≥T_v + M_v ) ≥ k (1 + 2) ) ≤ℙ( ∑_j ∈ℋ_0 1 ( - W_j ≥T_v) ≥ k(1 + 2 ) - M_v ), where the second last step above is because of the symmetry of W_j's with j∈ℋ_0 and the last step above is due to ∑_ j ∈ℋ_01 ( - W_j ≥T_v + M_v )-∑_ j ∈ℋ_01 ( - W_j ≥T_v )≤ M_v by the definitions of T_v and M_v. Moreover, Lemma <ref> above shows that M_v ≤ v with asymptotic probability one and Lemma <ref> above proves that v / k = 1 + o(1). Then it holds that 2 k > M_v with asymptotic probability one. Hence, combining the above results and by the union bound, we can deduce that ℙ ( V̂≥ k(1 + 2 ) ) ≤ℙ( ∑_j ∈ℋ_01( - W_j ≥T_v ) ≥ k ) + o(1) = q + o(1). Consequently, it follows that for each > 0, lim sup_n →∞ℙ ( V̂≥ k(1 + 2 ) ) ≤ q . This concludes the proof of Theorem <ref>. §.§ Proof of Theorem <ref> The main idea of the proof is to directly apply Theorem <ref> by verifying Conditions <ref>–<ref> involved. We will show in the lemmas below that Conditions <ref>–<ref> are satisfied for the marginal correlation knockoff statistics under Conditions <ref>–<ref> and the setting of nonparametric regression model (<ref>) with normal features. Proofs of Lemmas <ref>–<ref> are presented in Sections <ref>–<ref>. Assume that Condition <ref> is satisfied. Then we have that ℙ( max_1 ≤ j≤ p | Ŵ_j - W_j | ≥Δ_n ) → 0. Lemma <ref> above shows that Condition <ref> is satisfied with sequences b_n := Δ_n. Define w_j = ( Y^2)^-1/2 ( | (X_j Y)| - |(X_j Y)| ) for 1 ≤ j ≤ p. Note that w_j = 0 for j ∈ℋ_0 since (X_j, X_ℋ_1) d= (X_j, X_ℋ_1) for j ∈ℋ_0 by the exchangeability between X_j and X_j. Recall from the definition in (<ref>) that δ_n = √(log p/n)max_1 ≤ j ≤ p{ 16 √(2) X_j _ψ_2 Y _ψ_2/ ( Y^2)^1/2 8√(2) |w_j| Y _ψ_2^2 / Y^2 }. We have the concentration inequality below for W_j under the sub-Gaussian assumption in Condition <ref>. Assume that Condition <ref> is satisfied. When log p = o(n), we have that ∑_j = 1^p ℙ ( |W_j - w_j | ≥δ_n ) ≤ 6 p^-1 + p exp{ - n ( Y^2)^2 /8 Y^4 }. 
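Before discussing the implications of this lemma, we note for concreteness that the marginal correlation knockoff statistics analyzed in this subsection can be computed as in the minimal sketch below, with X_knock standing for either the perfect or the approximate knockoff matrix; the function name is illustrative and the sketch is not part of the theoretical development.

import numpy as np

def marginal_correlation_statistics(X, X_knock, y):
    # W_j = (sqrt(n) * ||y||_2)^{-1} * ( |X_j^T y| - |X_knock_j^T y| ), j = 1, ..., p.
    n = X.shape[0]
    scale = np.sqrt(n) * np.linalg.norm(y)
    return (np.abs(X.T @ y) - np.abs(X_knock.T @ y)) / scale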
Lemma <ref> above indicates that Condition <ref> related to the concentration rate of W_j is satisfied with δ_n defined in (<ref>) and that Δ_n ≤δ_n, where Δ_n is the approximation accuracy of the approximate knockoff statistics obtained in Lemma <ref>. In addition, from the definition of w_j, under Condition <ref> we have that the general Condition <ref> on the signal strength is also satisfied. Next we will turn to the verification of Conditions <ref>–<ref>. Assume that Condition <ref> is satisfied. Then we have that for each t ≥ 0, ( ∑_j ∈ℋ_01 (W_j ≥ t) ) / p_0 G (t) ≤ 2 m_n. Assume that Conditions <ref> and <ref> are satisfied. Then when (log p)^1/γ m_n / a_n → 0 and √(n)Δ_n (log p)^1/2 + 1/γ→ 0 for some constant 0< γ < 1, we have that (log p)^1/γsup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] G(t - Δ_n ) - G(t + Δ_n) / G(t) → 0 and a_n^-1∑_j ∈ℋ_1ℙ( W_j < - G^-1 ( c_1 q a_n / p ) + Δ_n ) → 0 as n →∞. Lemma <ref> above shows that Condition <ref> is satisfied, while Lemma <ref> above implies that Condition <ref> is satisfied. Finally, the conclusion of Theorem <ref> can be obtained by directly applying the general Theorem <ref>. This completes the proof of Theorem <ref>. §.§ Proof of Theorem <ref> The proof of Theorem <ref> is analogous to that of Theorem <ref> in Section <ref>. We omit the detailed proof here to avoid redundancy. §.§ Proof of Theorem <ref> The main idea of the proof is to directly apply Theorem <ref> by verifying Conditions <ref>–<ref> for the knockoff statistics constructed from the debiased Lasso coefficients. A key observation is that the debiased Lasso coefficients are asymptotically normal. Denote by τ_j = _j _2 / |_j^⊤^_j|. The debiased Lasso coefficient can be written as √(n) ( β_j - β_j^ ) = _j^⊤/_j _2 ·√(n)τ_j + ∑_k ≠ j√(n)_j^⊤^_k (β_k^ - β_k^)/_j^⊤^_j . Observe that _j^⊤/_j _2 ∼ N(0, σ^2), √(n)τ_j =O_p(1), and the remainder term in (<ref>) above is of order o_p(1). Thus, the debiased Lasso estimator is asymptotically normal in the sense that τ_j^-1 (β_j - β_j^) d→ N(0, σ^2). Our proof will build mainly on such intuition. Throughout the proof below, constant C may take different values from line to line. We first present two lemmas below about the consistency of Lasso estimators ^ and _j. We omit the proofs of Lemmas <ref> and <ref> here to avoid redundancy since they are well-known results for the consistency of Lasso estimators in the literature. Under Conditions <ref>–<ref>, we have that with probability 1 - O(p^-3), ^ - ^_1 ≤ C s √(log p/n), ^ - ^_2 ≤ C √( slog p/n) ^ (^ - ^ ) _2 ≤ C √( slog p) . Under Conditions <ref>–<ref>, we have that with probability 1 - O(p^-3), max_1 ≤ j ≤ 2p _j - _j _1 ≤ C m_n √(log p/n) , max_1 ≤ j ≤ 2p _j - _j _2 ≤ C √( m_nlog p/n), max_1 ≤ j ≤ 2p _-j^ (_j - _j) _2 ≤ C √( m_n log p). In addition, when m_n log p/n→ 0 we have that with probability 1 - O(p^-3), |√(n)τ_j - ( e_j^2)^-1/2 | ≤ C √(m_n log p/n), |_j^⊤_l - (e_j, e_l) | ≤ C √(m_n log p/n). The four lemmas below outline the proof for verifying the general Conditions <ref>–<ref>. Proofs of Lemma <ref>–<ref> are provided in Sections <ref>–<ref>. Assume that Conditions <ref> and <ref>–<ref> are satisfied. Then as Δ_n s^1/2→ 0 and √(s log p/n)→ 0, we have that ℙ( max_1 ≤ j ≤ 2p | β_j - β̂_j | ≥ C Δ_n s √(log p/n)) → 0. Lemma <ref> above indicates that Condition <ref> is satisfied with sequences b_n := C Δ_n s √(log p/n). Let us define w_j = |β_j|. Assume that Conditions <ref>–<ref> are satisfied. 
Then as s √(m_n log p/n)→ 0, we have that for some C > 0, ∑_j= 1^p ℙ (|W_j - w_j | ≥ C √(n^-1log p) ) → 0. Lemma <ref> above shows that Condition <ref> related to the concentration rate of W_j is satisfied with δ_n = C √(n^-1log p). In addition, it holds that b_n ≪ C √(n^-1log p) due to the assumption Δ_n s → 0 in Theorem <ref>. In addition, in light of the definition of w_j, under Condition <ref> we have that the general Condition <ref> on the signal strength is also satisfied. We next turn to the verification of Conditions <ref>–<ref>. Assume that Conditions <ref>–<ref> are satisfied. Then as m_n^1/2 s (log p)^3/2 + 1/γ/√(n)→ 0, we have that ( ∑_j ∈ℋ_01 (W_j > t) ) ≤ V_1 (t) + V_2 (t), where for some 0 < γ < 1 and 0< c_1 < 1, (log p )^1/γsup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] V_1 (t) / [p_0 G (t)]^2 → 0 and sup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] V_2(t) / p_0 G (t) ≲ m_n. Assume that Conditions <ref> and <ref>–<ref> are satisfied. Then when m_n^1/2 s (log p)^3/2 + 1/γ/√(n)→ 0 and Δ_n s (log p)^1 + 1 /γ→ 0, we have that (log p)^1/γsup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] G(t - b_n ) - G(t + b_n) / G(t) → 0 and a_n^-1∑_j ∈ℋ_1ℙ( W_j < - G^-1 ( c_1 q a_n / p ) + b_n ) → 0 as n →∞. Lemma <ref> above shows that Condition <ref> is satisfied, whereas Lemma <ref> implies that Condition <ref> is satisfied. Finally, the conclusion of Theorem <ref> can be derived by directly applying the general Theorem <ref>. This completes the proof of Theorem <ref>. §.§ Proof of Theorem <ref> The proof of Theorem <ref> is similar to that of Theorem <ref> in Section <ref>. Hence we omit the detailed proof here to avoid redundancy. §.§ Proof of Proposition <ref> From the definitions in (<ref>) and (<ref>), we see that - = r + + ( 1 - 1/√(Q_1/ν), …, 1 - 1/√(Q_n/ν)) , where = -, =(2 r I_p - r^2 )^1/2 - (2 r I_p - r^2 )^1/2, and = (2 r I_p - r^2 )^1/2. In view of assumption (<ref>) and the fact that := [(X)]^-1 = ν - 2/ν, it follows from the triangle inequality that with probability 1 - o(1), - _2 ≤ - _2 + - _2 = - _2 + 2 ν^-1_2 ≤ C ρ_n √(log p/n) + 2 ν^-1 C_l^-1. Now we deal with the three terms on the right-hand side of (<ref>) above separately. First, for the second term above, an application of similar arguments as for (<ref>) gives that with probability 1 - o(1), max_1 ≤ j ≤ p n^-1 ()_j _2^2 ≤ 3 _2^2 /2 ≤ C - _2^2 ≤ C ( ρ_n^2 log p/n + ν^-2). Regarding the first term on the right-hand side of (<ref>) above, observe that (_i, j , _i, l ) d= (η_i, j/√(Q_i / ν), η_i, l/√(Q_i / ν)), where (η_i, 1, …, η_i, p) d∼ N( 0, ^-1) and {Q_i}_i = 1^n are independent and identically distributed (i.i.d.) chi-square random variables with ν degrees of freedom. It holds that for some large constant C_1 > 0, ℙ( n^-1 ^⊤ - ^-1_max≥ C_1 √(log p/n) + ν^-1/2) = ℙ(max_1 ≤ j, l ≤ p| n^-1∑_i = 1^n η_i, jη_i, l/Q_i / ν - (η_i, jη_i, l) ( ν/Q_i ) | ≥ C_1 √(log p/n)+ ν^-1/2) ≤ℙ(max_1 ≤ j, l ≤ p| n^-1∑_i = 1^n ν/Q_i (η_i, jη_i, l - (η_i, jη_i, l)) | ≥ C_1 √(log p/n)) +ℙ(max_1 ≤ j, l ≤ p|n^-1∑_i = 1^n (η_i, jη_i, l) ( ν/Q_i - ( ν/Q_i ) )| ≥ν^-1/2). Before showing the bounds for the two probabilities on the right-hand side of the expression above, we first present some basic results for chi-square random variables. Note that from the property of the chi-square distribution, we have through some immediate calculations that ( ν^2/Q_i^2) = ν^2/(ν - 2) (ν - 4), ( ν/Q_i ) = ν^2 /(ν - 2)(ν - 4) - (ν/ν - 2)^2 = O(ν^-1), ( ν^2/Q_i^2 ) = ν^4/(ν - 2)(ν - 4)(ν - 6)(ν - 8) - (ν^2 /(ν - 2)(ν - 4))^2 = O (ν^-1). 
Thus, noting that ( ν^2/Q_i^2) + ν^-1/2 = ν^2/(ν - 2) (ν - 4) + ν^-1/2≤ 3 and ( ν^2/Q_i^2) - ν^-1/2≥ 2/3 when ν≥ 9, an application of the Markov inequality leads to ℙ( n^-1∑_i = 1^n ν^2/Q_i^2≥ 3 ) + ℙ( n^-1∑_i = 1^n ν^2/Q_i^2≤ 2/3 ) ≤ℙ( n^-1∑_i = 1^n ν^2/Q_i^2≥( ν^2/Q_i^2) + ν^-1/2) + ℙ( n^-1∑_i = 1^n ν^2/Q_i^2≤( ν^2/Q_i^2) - ν^-1/2) ≤ν n^-1( ν^2/Q_i^2 ) = O(n^-1 ) → 0. In addition, noting that e^-x/2 ≤ 1 and Stirling's formula for the gamma function Γ(x) = √(2 π / x) (x/ e)^x (1 + O(x^-1)) for x ≥ 0, we have through applying the density function of the chi-square distribution that for each constant C > 0, ℙ( max_1 ≤ i ≤ n ν/Q_i≥ C √(n/log p)) ≤ n ∫_0^C^-1ν√(log p/n)x^ν / 2 - 1 e^- x / 2/ 2^ν / 2Γ(ν / 2) dx ≤ 2 n (C^-1ν√(log p/n))^ν / 2/ν 2^ν / 2Γ(ν / 2) ≲ n ( C^-2log p/n)^ν / 4ν ^ν /2 /ν 2^ν /2√(4 π / ν) (ν / 2 e)^ν / 2 = ( C^-2 e^2 log p/n^1 - 4/ ν)^ν / 4 1/√(4 πν)→ 0 when log p = o(n^1 - 4/ν). Now we are ready to deal with the two probabilities on the right-hand side of (<ref>) above. Let us define two events 𝒟_1 = {max_1 ≤ i ≤ nν/Q_i≤ C_2 √(n/log p)} for a small constant C_2 > 0 and 𝒟_2 = {2/3 ≤ n^-1∑_i = 1^n ν^2/Q_i^2≤ 3 }. It follows from (<ref>) and (<ref>) that ℙ (𝒟_1^c) → 0 and ℙ (𝒟_2^c) → 0. For the first probability in (<ref>) above, since η_i, jη_i, l is a sub-exponential random variable and Q_i η_i, jη_i, l, we can obtain by applying the concentration inequality for the weighted sum of sub-exponential random variables (cf. Corollary 4.2 in <cit.>) that when C_1 is large enough and C_2 is small enough, ℙ(max_1 ≤ j, l ≤ p| n^-1∑_i = 1^n ν/Q_i (η_i, jη_i, l - (η_i, jη_i, l)) | ≥ C_1 √(log p/n)) ≤ℙ(max_1 ≤ j, l ≤ p| n^-1∑_i = 1^n ν/Q_i (η_i, jη_i, l - (η_i, jη_i, l)) | ≥ C_1 √(log p/n) , 𝒟_1 ∩𝒟_2 ) + ℙ (𝒟_1^c) + ℙ (𝒟_2^c) ≤ 2 p^2 exp{ - 3 log p } + o(1) → 0. Regarding the second probability in (<ref>), since max_1 ≤ j, l ≤ p | (η_i, jη_i, l) | ≤max_1 ≤ j ≤ p(η_i, j^2) ≤max_1 ≤ j ≤ p (^-1)_j, j≤ C_u, an application of the Markov inequality and (<ref>) yields that ℙ(max_1 ≤ j, l ≤ p|n^-1∑_i = 1^n (η_i, jη_i, l) ( ν/Q_i - ( ν/Q_i ) )| ≥ν^-1/2) ≤ℙ( |n^-1∑_i ( ν/Q_i - ( ν/Q_i ) )| ≥ C_u^-1ν^-1/2) ≤ C_u^-2ν n^-1 (ν/Q_i) = O(n^-1) → 0. By plugging (<ref>) and (<ref>) into (<ref>), we can show that with probability 1 - o(1), max_δ: δ_0 ≤ρ_n |δ^⊤ ( n^-1^⊤ - ^-1 ) δ |/δ_2^2 ≤ C ρ_n ( √(log p/n) + ν^-1/2) , which along with the fact ^-1_2 = ν/ν - 2^-1_2 ≤ν/ν - 2 C_u entails that as ρ_n = o(√(n / (log p))) and ρ_n = o(√(ν)), max_δ: δ_0 ≤ρ_nδ^⊤^⊤δ/ n δ_2^2 ≤C for some constant C > 0. Using (<ref>) and the sparsity assumption that max_1 ≤ j ≤ p_j_0 + _n_0 ≤ρ_n, an application of similar arguments as for (<ref>) gives that with probability 1 - o(1), max_1 ≤ j ≤ p n^-1_j _2^2 = n^-1_j^⊤ ^⊤_j ≤ C max_1 ≤ j ≤ p_j ^2_2 = C - _2^2 ≤ C( ρ_n^2 log p/n + ν^-2). We now proceed with examining the third term on the right-hand side of (<ref>) above. Observe that _j d∼ N( 0, _j _2^2 I_n) and max_1 ≤ j ≤ p_j _2 ≤_2 ≤ 2r. Hence, it holds for some large constant C_3 > 0 that ℙ( max_1 ≤ j ≤ p n^-1( 1 - 1/√(Q_1/ν), …, 1 - 1/√(Q_n/ν)) _j _2^2 ≥ C_3 ν^-1) = ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n (1 - 1/√(Q_i / ν))^2 _j ^2 Z_i^2 ≥ C_3 ν^-1) ≤ℙ( n^-1∑_i = 1^n (1 - 1/√(Q_i / ν))^2 Z_i^2 ≥ C_3 ν^-1 / 4r^2 ) , where {Z_i }_i = 1^n are i.i.d. standard normal random variables that are independent of and {Q_i }_i = 1^n. 
Similar to the calculations in (<ref>) and (<ref>), we can deduce that [ (1 - 1/√(Q_i / ν))^2 Z_i^2 ] = (Z_i^2) [ (1 - 1/√(Q_i / ν))^2 ] = 1 - ( 2/√(Q_i / ν)) + ( 1/ Q_i / ν) = 1 - √(2 ν)Γ(ν - 1/2)/Γ(ν/2) + ν/ν - 2 and [ (1 - 1/√(Q_i / ν))^4 Z_i^4 ] = 3 ( 1 - 2 √(2)νΓ(ν - 1/2)/Γ(ν/2) + 6 (ν - 2)/ν - √(2)ν^3/2Γ(ν - 3/2)/Γ(ν/2) + ν^2/(ν - 2) (ν - 4)) By applying the asymptotic series of the gamma function Γ(x + 1/2 )/Γ(x) = √(x)(1 - 1/8 x + O(x^-2) ), we can obtain through some direct calculations that [ (1 - 1/√(Q_i / ν))^2 Z_i^2 ] = O(ν^-1) and [ (1 - 1/√(Q_i / ν))^4 Z_i^4 ] = O(ν^-2). Combining (<ref>) and (<ref>) and applying the Markov inequality, we have that for some large enough constant C_3 > 0, ℙ( max_1 ≤ j ≤ p n^-1( 1 - 1/√(Q_1/ν), …, 1 - 1/√(Q_n/ν)) _j _2^2 ≥ C_3 ν^-1) ≤ℙ( n^-1∑_i = 1^n (1 - 1/√(Q_i / ν))^2 Z_i^2 -[ (1 - 1/√(Q_i / ν))^2 Z_i^2 ] ≥ C_3 (ν^-1 ) / 4r^2 - O(ν^-1) ) ≤ C ν^-2 n^-1( (1 - 1/√(Q_i / ν))^2 Z_i^2 ) ≤ C ν^-2 n^-1( ( (1 - 1/√(Q_i / ν))^4 Z_i^4 ) ) = O (n^-1) → 0. Therefore, a combination of (<ref>), (<ref>), (<ref>), and (<ref>) yields the desired conclusion in (<ref>). This concludes the proof of Proposition <ref>. §.§ Proof of Proposition <ref> It follows from (<ref>) and (<ref>) that - = r + , where = - and =(2 r I_p - r^2 )^1/2 - (2 r I_p - r^2 )^1/2. By the Gaussianity of X, we see that X_j X_l is a sub-exponential random variable and thus for 0< u < C, ℙ ( | n^-1_j^⊤_l - (X_j X_l) | ≥ u ) ≤ 2 exp{ - C n u^2 }. Then we can obtain that ℙ( max_1 ≤ j ≤ p, 1 ≤ l ≤ p | n^-1_j_l - (X_j X_l) | ≥ C √(log p/n)) = o(1). Consequently, with probability 1 - o(1) it holds that max_δ: δ_0 ≤ρ_n |δ^⊤ ( n^-1^⊤ - ^-1 ) δ |/δ_2^2 ≤ C ρ_n √(log p/n), which combined with the assumption that ^-1_2 ≤ C_u leads to max_δ: δ_0 ≤ρ_nδ^⊤^⊤δ/ n δ_2^2 ≤ C_u + C ρ_n √(log p/n)≤C for some constant C > 0. Since _j _0 = ( - )_j_0 ≤ C ρ_n because of the sparsity of and , it follows from (<ref>) that with probability 1 - o(1), max_1 ≤ j ≤ p n^-1 ()_j_2^2 = max_1 ≤ j ≤ p n^-1_j_2^2 ≤max_1 ≤ j ≤ pC_j _2^2 = max_1 ≤ j ≤ pC (- )_j _2^2 ≤max_1 ≤ j ≤ pC- _2^2 ≤Cρ_n^2 log p/n, where we have used the accuracy assumption in (<ref>). Next we proceed with analyzing term . Observe that given , has i.i.d. standard normal components and is independent of , and hence _j|_j d∼ N( 0, _j_2^2 I_n). It holds that _j|_j d= (Z_1 _j_2, …, Z_n _j _2) with { Z_i }_i = 1^n i.i.d. standard normal random variables. Then we can deduce that ℙ ( max_1 ≤ j ≤ p n^-1 ()_j _2^2 ≥ 3_2^2 /2 | ) = ℙ ( max_1 ≤ j ≤ n^-1_j _2^2 ≥ 3_2^2 /2 |) = ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n Z_i^2 _j _2^2 ≥ 3 _2^2 /2 |) ≤ℙ( n^-1∑_i = 1^n Z_i^2 _2^2 ≥ 2 _2^2 |) = ℙ( n^-1∑_i = 1^n Z_i^2 ≥ 3/2 ) ≤ e^- n / 32→ 0 as n→∞, where we have used the fact that max_1 ≤ j ≤ p_j _2 ≤_2 and the concentration inequality for chi-square random variables that for 0 < t < 1, ℙ( | n^-1∑_i = 1^n Z_i^2 - 1 | ≥ t ) ≤ 2 e^- n t^2 / 8. Now we aim to bound _2. For two square matrices A and B, it holds that A^1/2 - B^1/2_2 = A^1/2 (B - A) B^-1 + (A^3/2 - B^3/2) B^-1_2 ≤A^1/2 (B - A) B^-1_2 +3 max{ A _2^1/2, B _2^1/2}A - B _2B^-1_2. Applying the above inequality to leads to _2 ≤ 2 r I_p - r^2 _2^1/2· r^2 - _2 · 2 r I_p - r^2 ^-1 + 3 max{ 2 r I_p - r^2 _2^1/2, 2 r I_p - r^2 _2^1/2}· r^2 - _2· 2 r I_p - r^2 ^-1 ≤ C - _2. Thus, from (<ref>) and assumption (<ref>), we can obtain that with probability 1 - o(1), max_1 ≤ j ≤ p n^-1 ()_j _2^2 ≤ 3 _2^2 /2 ≤ C - _2^2 ≤ C ρ_n^2 log p/n. Note that _j - _j _2 ≤ r _j _2 + _j _2. 
Therefore, in view of (<ref>) and (<ref>) we can show that for some constant C > 0, ℙ(n^-1/2_j - _j _2 ≤ C ρ_n √(log p/n)) → 1. This completes the proof of Proposition <ref>. §.§ Proof of Proposition <ref> In light of the definitions of and , we can obtain through the triangle inequality that n^-1/2max_1 ≤ j ≤ p_j - _j _2 ≤max_1 ≤ j ≤ p n^-1/2( ∑_i = 1^n [F̂_j^-1(Φ(_i, j)) - F̂_j^-1 (Φ(_i, j )) ]^2 )^1/2 + max_1 ≤ j ≤ p n^-1/2( ∑_i = 1^n [F̂_j^-1(Φ(_i, j)) - F_j^-1 (Φ(_i, j )) ]^2 )^1/2. We claim that ℙ(max_1 ≤ j ≤ p n^-1∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F̂_j^-1 (Φ(_i, j )) ]^2 ≥C( ρ_n^2 log p/n + p ρ_n (log n)^3/n) ) → 0, ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F_j^-1 (Φ(_i, j )) ]^2 ≥2 M p (log n)^2 /n) → 0, which together with (<ref>) yields the desired conclusion of Proposition <ref>. It remains to establish (<ref>) and (<ref>). We will begin with the proof of (<ref>). Proof of (<ref>). From assumption (<ref>) and the observation that log n/n^2≪p ρ_n (log n)^3/n, it holds that for some large constant C > 0, ℙ(max_1 ≤ j ≤ p n^-1 ∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F̂_j^-1 (Φ(_i, j )) ]^2 ≥ C (ρ_n^2 log p/n + p ρ_n (log n)^3/n) ) ≤ℙ(max_1 ≤ j ≤ p n^-1 ∑_i = 1^n [ | Φ(_i, j) - Φ(_i, j)|^2 + (log n)^2 n^-2 + n^-1 (log n )|Φ(_i, j) - Φ(_i, j)| ] ≥ C (ρ_n^2 log p/n + p ρ_n (log n)^3/n) ) + ℙ( max_1 ≤ j ≤ psup_ x, y ∈ (0, 1) | F̂_j^-1 (x) - F̂_j^-1 (y) | / |x - y| + ( n^-1 (log n) |x - y| )^1/2 + n^-1log n ≥ M ) ≤ℙ(max_1 ≤ j ≤ p n^-1 ∑_i = 1^n [ | Φ(_i, j) - Φ(_i, j)|^2 + n^-1 (log n) |Φ(_i, j) - Φ(_i, j)| ] ≥C(ρ_n^2 log p/n + p ρ_n (log n)^3/n) ) + o(1) : = P_1 + o(1). We next bound term P_1 above. Using the fact that |Φ(x) - Φ(y)| ≤1/√(2 π) | x - y | and the basic inequality ∑_i = 1^n |a_n| ≤√(n) (∑_i = 1^n a_n^2)^1/2, we have that P_1 ≤ℙ( max_1 ≤ j ≤ p( n^-1_j - _ j_2^2 + (log n) n^-3/2_j - _ j_2 ) ≥C(ρ_n^2 log p/n + p ρ_n (log n)^3/n) ). It suffices to consider the bound of max_1 ≤ j ≤ p n^-1_j - _ j_2^2. With the aid of the triangle inequality and the definitions of and , it follows that max_1 ≤ j ≤ p n^-1_j - _ j_2^2 ≤ 3 max_1 ≤ j ≤ p n^-1 ( - ) (I_p - r )_j _2^2 + 3 r^2 max_1 ≤ j ≤ p n^-1 ( _j - _j ) _2^2 + 3 max_1 ≤ j ≤ p n^-1 [(2 r I_p - r^2 )^1/2 - (2 r I_p - r^2 )^1/2 ] _2^2. We will investigate the three terms in the upper bound above separately. Regarding the third term above, under the assumption in (<ref>) it has been shown in (<ref>) that with probability 1 - o(1), max_1 ≤ j ≤ p n^-1 [(2 r I_p - r^2 )^1/2 - (2 r I_p - r^2 )^1/2 ] _2^2 ≤ C ρ_n^2 log p/n . As for the second term in the upper bound in (<ref>), noting that the rows of are i.i.d. and follow the Gaussian distribution N( 0, ^-1), an application of similar arguments as for (<ref>) gives that with probability 1 - o(1), max_1 ≤ j ≤ p n^-1 ( _j - _j ) _2^2 ≤ Cρ_n^2 log p/n . For the first term in the upper bound in (<ref>) above, noting that I_p - r)_j ≤ρ_n + 1 by the sparsity assumption that _j≤ρ_n, we have that max_1 ≤ j ≤ p n^-1 ( - ) (I_p - r )_j _2^2 ≤max_J: |J| ≤ρ_n +1 n^-1 (_J - _J)^⊤ (_J - _J) _2 ×max_1 ≤ j ≤ p (I_p - r )_j _2^2. For the second term in the bound above, from the triangle inequality and inequality _j_2 ≤_2 for each matrix , it is easy to see that max_1 ≤ j ≤ p (I_p - r )_j _2 ≤ I_p - r _2 ≤ I_p - r _2 + r - _2. Thus it follows from assumption (<ref>) that for a constant C > 0, with probability 1 - o(1) we have max_1 ≤ j ≤ p (I_p - r )_j _2 ≤ C. 
Regarding the first term on the right-hand side of (<ref>) above, using the definitions of and , and inequality _2 ≤ d _max for each square matrix ∈ℝ^d × d, we can deduce that max_J: |J| ≤ρ_n +1 n^-1 (_J - _J)^⊤ (_J - _J) _2 ≤ (ρ_n + 1) n^-1 ( - )^⊤ ( - ) _max ≤ (ρ_n + 1) max_1 ≤ j ≤ p n^-1∑_i = 1^n |_i, j - _i, j|^2 = (ρ_n + 1) max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F̂_j (_i, j )) - Φ^-1 ( F_j (_i, j )) |^2 . Denote by H_j, n =[F_j^-1 (2M n^-1 log n), F_j^-1 (1 - 2M n^-1log n )] with constant M as given in assumption (<ref>). We can write that max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F̂_j (_i, j )) - Φ^-1 ( F_j (_i, j )) |^2 = max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F̂_j (_i, j )) - Φ^-1 ( F_j (_i, j )) |^2 1 (_i, j∈ H_j, n) + max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F̂_j (_i, j )) - Φ^-1 ( F_j (_i, j )) |^2 1 (_i, j∉ H_j, n) := E_1 + E_2. Let us first consider term E_2 above. Observe that E_2 ≤max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F̂_j (_i, j )) |^2 1 (_i, j∉ H_j, n) + max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F_j (_i, j )) |^2 1 (_i, j∉ H_j, n). For the first term in the bound above, notice that |Φ^-1 (F̂_j (_i, j)) |= O(√(log n )) due to the assumption that 1/2n≤ F_j(x) ≤ 1 - 1/2n for each x∈(X_j). Then it follows from the union bound, the Markov inequality, and the definition of H_j, n that ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F_j (_i, j )) |^2 1 (_i, j∉ H_j, n) ≥p (log n )^3/n) ≤∑_j = 1^p ℙ( n^-1log n ∑_i = 1^n 1 (_i, j∉ H_j, n) ≥p (log n )^3/n) ≤ n /p (log n)^2∑_j = 1^p ℙ (_i, j∉ H_j, n) = p n / p (log n )^2 · 4 M n^-1log n = 4 M (log n )^-1→ 0. As for the second term in the upper bound in (<ref>) above, an application of the Markov inequality and the fact that F_j(_i, j ) follows the standard uniform distribution gives that ℙ(max_1 ≤ j ≤ p n^-1∑_i = 1^n | Φ^-1 (F_j (_i, j )) |^2 1 (_i, j∉ H_j, n) ≥p (log n )^3/n) ≤n/p (log n)^3∑_j = 1^p (| Φ^-1 (F_j (_i, j )) |^2 1 (_i, j∉ H_j, n) ) = 2 n/ (log n)^3∫_-∞^Φ^-1 (2Mlog n/n ) 1/√(2π) u^2 e^-u^2/2 du ≤2n/(log n)^3 |Φ^-1 (2Mlog n/n )| ∫_-∞^Φ^-1 (2Mlog n/n ) 1/√(2π) |u|^3 e^-u^2/2 du ≤ C n/(log n)^3 |Φ^-1 (2Mlog n/n ) | ·| Φ^-1 (2Mlog n/n)|^3 ·Φ(Φ^-1 (2Mlog n/n) ) ≤ C (log n)^-1→ 0, where in the last step above, we have used the facts that |Φ^-1 (M log n/n) | ≤ C √(log n), ∫ u^3 e^-u^2 / 2 du = - (u^2 + 2) e^-u^2/2, and e^-x^2/2/Φ(x) = O(|x|) for x < -2. Combining (<ref>), (<ref>), and (<ref>) yields that with probability 1 - o(1), E_2≤p (log n )^3/n. Next we proceed with studying term E_1. First, note that when |Φ^-1 (y )| > 2, it holds that [ Φ^-1 (y) ]' = 1/Φ'( Φ^-1 (y) )≤ C 1/(y (1 - y)) |Φ^-1 (y)| due to the fact that Φ'(x) /(1 - Φ(x) ) ≥ C x for x > 2 and Φ'(x) / Φ(x) ≥ C |x| for x < -2. When |Φ^-1(y)| ≤ 2, it is easy to see that [ Φ^-1 (y) ]' = 1/Φ'( Φ^-1 (y) )≤ C. Thus, combining the previous two results shows that for y ∈ℝ, [ Φ^-1 (y) ]' ≤C/(y (1 - y)) |Φ^-1 (y)| ≤C/(y (1 - y)) . Let us define an interval δ_j(x) = [F_j(x) - √(M [F_j(x) (1 - F_j(x)) ] log n/n), F_j(x) + √(M [F_j(x) (1 - F_j(x)) ]log n/n)]. Observe that under assumption (<ref>), we have that ℙ ( E_1≥ x) ≤ℙ( max_1 ≤ j ≤ p n^-1 (M log n/n) ∑_i = 1^n (sup_y ∈δ_j(_i, j ) [Φ^-1 (y)]' )^2 F_j (_i, j ) (1 - F_j (_i, j ) ·1 (_i, j∈ H_j, n) ≥ x) + o(1). When _i, j∈ H_j, n, it holds that F_j(_i, j ) ∈ [2 M n^-1log n, 1 - 2 M n^-1log n] and hence sup_y ∈δ(_i, j ) | y/F (_i, j) - 1 | ≤√(M log n/n F_j(_i, j ))≤ 1/√(2). Similarly, we have that sup_y ∈δ(_i, j ) | 1 - y/1 - F (_i, j) - 1 | ≤ 1/√(2). 
The above two bounds combined with (<ref>) yields that for _i, j∈ H_j, n, sup_y ∈δ_j(_i, j ) [Φ^-1 (y)]' ≤sup_y ∈δ_j(_i, j )C / y (1 - y)≤C /F_j(_i, j ) (1 - F_j(_i, j )). In view of the above bound, (<ref>), and the fact that F_j(_i, j ) follows the standard uniform distribution, we can deduce that ℙ ( E_1≥p (log n)^3/n) ≤ℙ(max_1 ≤ j ≤ p n^-1 (M log n/n) ∑_i = 1^n C / F_j(_i, j ) (1 - F_j(_i, j )) 1 (_i, j∈ H_j, n) ≥p (log n)^3/n) + o(1) ≤C M /p (log n)^2 ∑_j = 1^p ( 1 / F_j(_i, j ) (1 - F_j(_i, j )) 1 (_i, j∈ H_j, n) ) = C M / (log n)^2 ∫_2 M n^-1log n^1 - 2 M n^-1log n1/u(1 - u) du ≤C M / (log n)^2 · C log n ≤C M /log n→ 0. A combination of (<ref>), (<ref>), (<ref>), and (<ref>) shows that with probability 1 - o(1), max_J: |J| ≤ρ_n +1 n^-1 (_J - _J)^⊤ (_J - _J) _2 ≤C p ρ_n (log n )^3 /n, which together with (<ref>)–(<ref>) entails that with probability 1 - o(1), n^-1max_1 ≤ j ≤ p_j - _ j_2^2 ≤ C ( ρ_n^2 log p/n + p ρ_n (log n )^3 /n) and (log n) n^-3/2max_1 ≤ j ≤ p_j - _ j_2 ≤ C (log n) n^-1( ρ_n log p/n + √( p ρ_n (log n )^3 /n)). Plugging (<ref>) into (<ref>), it follows that P_1 → 0. Therefore, substituting (<ref>) into (<ref>) derives the desired result (<ref>). It remains to establish (<ref>). Proof of (<ref>). Let us define I_n = [2M n^-1log n, 1 - 2M n^-1log n]. It holds that ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F_j^-1 (Φ(_i, j )) ]^2 ≥2 M p (log n)^2 /n) = ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F_j^-1 (Φ(_i, j )) ]^2 1 ( Φ(_i, j) ∈ I_ n) ≥ M p (log n)^2 /n) + ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F_j^-1 (Φ(_i, j )) ]^2 1 ( Φ(_i, j) ∉ I_ n) ≥ M p (log n)^2 /n). For the first term on the right-hand side of (<ref>) above, under assumption (<ref>) we have that ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F_j^-1 (Φ(_i, j )) ]^2 1 ( Φ(_i, j) ∈ I_ n) ≥ M p (log n)^2 /n) ≤ℙ( M log n /n≥ M p (log n)^2 /n) + o(1) = 0 + o(1) → 0. Regarding the second term on the right-hand side of (<ref>) above, observe that | F_j^-1 (Φ(_i, j)) | ≤ b and |F̂_j^-1 (Φ(_i, j)) | ≤ b by the assumption (X_j) ∈ [-b, b]. In addition, Φ(_i, j) follows the standard uniform distribution and thus ℙ (Φ(_i, j) ∉ I_n) = 4 M n^-1log n. Then we can deduce that ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n [F̂_j^-1 (Φ(_i, j)) - F_j^-1 (Φ(_i, j )) ]^2 1 ( Φ(_i, j) ∉ I_1, n) ≥ M p (log n)^2 /n) ≤ℙ( max_1 ≤ j ≤ p n^-1∑_i = 1^n 1 ( Φ(_i, j) ∉ I_ n) ≥ M p (log n)^2 /4 n b^2 ) ≤4 n b^2 / M p (log n)^2 · p ℙ ( Φ(_i, j∉ I_n) ) = 16 b^2 /log n→ 0. Finally, combining (<ref>)–(<ref>) leads to the desired result (<ref>). This concludes the proof of Proposition <ref>. § PROOFS OF SOME KEY LEMMAS §.§ Proof of Lemma <ref> Let g_j (· | _-j) be the conditional density function of X_j | X_-j = _-j for X = (X_1, …, X_p )^⊤d∼ t_ν ( 0, I_p) and h_j(· | _ -j) the conditional density function of X̂_j | X̂_-j = _-j for X̂ = (X̂_1, …, X̂_p)^⊤d∼ N( 0, I_p). Following the definition in <cit.>, we define K̂L̂_j : = ∑_i = 1^n log( g_j (_i, j | _i, -j) · h_j(_i, j | _i, j ) / h_j (_i, j | _i, - j) g_j (_i, j | _i, -j) ), where = (_i, j ) ∈ℝ^n × p consists of i.i.d. rows sampled from t_ν ( 0, I_p) and = (_i, j)∈ℝ^n × p consists of i.i.d. rows sampled from N( 0, I_p). Note that Theorem 1 in <cit.> states that ≤min_≥ 0{q e^ + ℙ(max_j ∈ℋ_0K̂L̂_j > ) }. We claim that if n p /ν (ν + p)≥ C for some constant C> 0, there exists some positive constant α such that ℙ( K̂L̂_j≥ C/4 ) ≥α. 
Then it holds that for 0 < < C/4, ℙ(max_1 ≤ j ≤ pK̂L̂_j≥) ≥α, and thus we cannot obtain the desired asymptotic FDR control lim sup_(n, p)≤ q via applying Theorem 1 in <cit.>. By contradiction, to allow ℙ(max_1 ≤ j ≤ pK̂L̂_j≥) → 0, we must have that np/ν (ν + p)→ 0 , which is equivalent to ν^2 ≫ n min (n, p). Hence, Lemma <ref> is proved. Now it remains to establish (<ref>). Proof of (<ref>). Note that <cit.> showed that the conditional density g_j(_j | _-j) of the multivariate t-distribution satisfies that g_j(_i, j | _i, -j) ∝( 1 + _i, j ^2/ν + _i, -j_2^2 )^- (ν + p) / 2. It is easy to see that the conditional density h_j (_i, j | _i, -j) of the standard normal distribution satisfies that h_j(_i, j | _i, -j) ∝exp{ - _i, j ^2 / 2 }. Plugging the two expressions above into (<ref>) yields that K̂L̂_j = ∑_i = 1^n [ _i, j ^2/2 - ν + p/2log(1 + _i, j ^2/ν + _i, -j_2^2) - (_i, j^2/2 - ν + p/2log(1 + _i, j^2/ν + _i, -j_2^2) )]. Applying the basic inequality that |log (1 + x) - (x - x^2/2)| ≤ x^3 for each x > 0, we can obtain that K̂L̂_j = R_1, j + R_2, j + R_3, j, where R_1, j = ∑_i = 1^n [ _i, j ^2 (ν + p)/2 (ν + _i, -j_2^2 )( ν + _i, -j_2^2/ν + p - 1 ) - _i, j^2 /2 ( 1 - ν + p/ν + _i, -j_2^2) ], R_2, j = ∑_i = 1^n ν + p/4(_i, j^4/(ν + _i, -j_2^2)^2 - _i, j ^4/(ν + _i, -j_2^2)^2), R_3, j = ∑_i = 1^n ν + p/2(_i, j^6/(ν + _i, -j_2^2)^3 + _i, j ^6/(ν + _i, -j_2^2)^3). We now calculate the mean and variance of K̂L̂_j separately. Observe that _i, jd∼ N(0, 1), (p-1)^-1_i, -j_2^2 d∼ F_p-1, ν, _i, -j√(ν + p/ν + _i, -j_2^2)_i, j, and √(ν + p-1/ν + _i, -j_2^2)_i, jd∼ t_ν + p- 1 as shown in <cit.>. Using the properties of the multivariate t-distribution and F-distribution, some straightforward calculations show that (R_1, j) = n/2[ ν + p/ν + p - 3( ν (ν + p - 3)/(ν - 2)(ν + p) - 1 ) - ( 1 - (ν + 2) (ν + p)/ν (ν + p - 1)) ] = n ( p/ν (ν + p) + O(ν^-2) ), (R_2, j) = 3 n (ν + p)/4 [ 1/(ν + p - 3) (ν + p - 5) - ν + 2/ν (ν + p - 1)(ν + p + 1)] = O (n /ν (ν + p)), and (R_3, j) ≤ C n (ν + p)^-2. Combining (<ref>)–(<ref>) yields that when ν and p are large, (K̂L̂_j) = n p/ν (ν + p) + O(n ν^-2)≥n p/2 ν (ν + p). Next we analyze the variance of K̂L̂_j. Notice that (K̂L̂_j) = ( ( K̂L̂_j - K̂L̂_j )^2 ) ≤ C ∑_i = 1^n {[ _i, j ^2 (ν + p)/2 (ν + _i, -j_2^2 )( ν + _i, -j_2^2/ν + p - 1 ) - _i, j^2 /2 ( 1 - ν + p/ν + _i, -j_2^2) ]^2 } + C ∑_i = 1^n [ (ν + p)^2/16(_i, j^4/(ν + _i, -j_2^2)^2 - _i, j ^4/(ν + _i, -j_2^2)^2)^2 ] ≤Cn p/ν (ν + p), where in the last step above, we have used the facts that ( _i, j ^4 (ν + p)/ (ν + _i, -j_2^2 )^2) ≤ C, [ ( ν + _i, -j_2^2/ν + p - 1 )^2 ] = 2 p /ν (ν + p) + O(ν^-2), [ ( 1 - ν + p /ν + _i, -j_2^2)^2 ] = 2 p /ν (ν + p) + O(ν^-2). In view of the results on the mean and variance of K̂L̂_j shown in (<ref>) and (<ref>) above, we see that if np/ν (ν + p)≥ C for some constant C > 0, (K̂L̂_j ) ≥np/2ν (ν + p)≥ C /2 . Therefore, we can obtain through the one-sided Markov inequality that for a small constant α > 0 (noting that (K̂L̂_j) > 2 α√( (K̂L̂_j)) if α is small), ℙ (K̂L̂_j ≥ C/4) ≥ℙ (K̂L̂_j ≥ (K̂L̂_j )/2 ) ≥ℙ( K̂L̂_j≥ (K̂L̂_j) - α√((K̂L̂_j))) ≥ 1 - (K̂L̂_j )/(K̂L̂_j ) + α^2 (K̂L̂_j ) = α^2/1 + α^2, which establishes (<ref>). This completes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> Recall that G(t) = p_0^-1∑_j ∈ℋ_0ℙ (W_j ≥ t) and G(t) is a decreasing, continuous function. 
The main idea of the proof is to divide the continuous interval (0, G^-1 (c_1 q a_n/p)] into a diverging number of smaller intervals with end points { t_i }_i = 0^l_n such that t_0≥ t_1≥⋯≥ t_l_n and |G(t_i)/ G(t_i+1) - 1 | → 0 uniformly for 0 ≤ i ≤ l_n as l_n→∞. Then the supreme over the continuous interval (0, G^-1 (c_1 q a_n/p)] can be reduced to the supreme over the set of discrete points {t_i}_i= 0^l_n and hence, we can apply the union bound to establish the desired result. Similar arguments have also been used in <cit.>, <cit.>, and <cit.>. We detail only the proof of (<ref>) here since (<ref>) can be shown in a similar fashion. We start with defining a sequence 0 ≤ z_0 < z_1 < ⋯ < z_l_n = 1 and t_i = G^-1 (z_i), where z_0 = c_1 q a_n/p, z_i = c_1 q a_n/p + h_n e^i ^γ/p, and l_n = [log ((p - c_1 q a_n)/h_n)]^1/γ with 0 < γ < 1 and sequence h_n →∞ satisfying that h_n /a_n → 0. As long as m_n /a_n = o(1), we can choose h_n = a_n/(a_n / m_n)^η for some η∈ (0, 1). Then an application of similar technical analysis as in <cit.> shows that as a_n →∞, sup_0 ≤ i ≤ l_n|G(t_i)/G(t_i+1) - 1 | → 0. For t ∈ (0, G( c_1 q a_n/p)], there exists some 0 ≤ i ≤ l_n - 1 such that t ∈ [t_i+1, t_i]. It follows from the monotonicity of ℙ (W_j ≥ t) and 1 ( Ŵ_j ≥ t) that | ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t) / p_0 G(t) - 1 | ≤max{| ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i+1)/p_0 G(t_i) - 1 |, | ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i)/p_0 G(t_i+1) - 1 | } . The two terms within the brackets on the right-hand side of the expression above can be bounded similarly and we will provide only the details on how to bound the first term for simplicity. With the aid of the fact that | x y - 1 | ≤ | x -1| |y - 1| + |x - 1| + |y -1| for all x, y ∈ℝ, we can deduce that | ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i+1)/p_0 G(t_i) - 1 | ≤| ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i+1)/p_0 G(t_i+1) - 1 | ·sup_0 ≤ i ≤ l_n| G(t_i)/G(t_i+1) -1 | + | ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i+1)/p_0 G(t_i+1) - 1 | + sup_0 ≤ i ≤ l_n| G(t_i)/G(t_i+1) -1 | ≤| ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i+1)/p_0 G(t_i+1) - 1 |·(1+o(1)) + sup_0 ≤ i ≤ l_n| G(t_i)/G(t_i+1) -1 |, where the last step above is because of (<ref>) and the o(1) term is uniformly over all i. Combining the above two results and applying (<ref>) again lead to | ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t) / p_0 G(t) - 1 | ≤max{| ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i+1)/p_0 G(t_i+1) - 1 |, | ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i)/p_0 G(t_i) - 1 | } ×(1 + o(1) ) + o(1). Thus, to prove the desired result, it is sufficient to show that D_n := sup_0 ≤ i ≤ l_n| ∑_j ∈ℋ_0 1 (Ŵ_j ≥ t_i) / p_0 G(t_i) - 1 |=o_p(1). We now proceed with establishing (<ref>). Let us define an event ℬ_3 = {max_1 ≤ j ≤ p | Ŵ_j - W_j | ≤ b_n }. From Condition <ref>, it holds that ℙ (ℬ_3^c) → 0. Note that for any two events A and B, we have that ℙ(A) ≤ℙ(A∩ B) + P(B^c). Repeatedly using such inequality, the union bound, and the property that ℙ (ℬ_3^c) → 0, we can deduce that for each ϵ > 0, ℙ ( D_n ≥ϵ ) ≤∑_i = 0^l_nℙ( | ∑_j ∈ℋ_0 {1 (Ŵ_j ≥ t_i) - ℙ ( W_i ≥ t_i ) }/ p_0 G(t_i) | ≥ϵ, ℬ_3 ) + ℙ (ℬ_3^c) ≤∑_i = 0^l_nℙ( | ∑_j ∈ℋ_0 {1 (W_j ≥ t_i) - ℙ ( W_i ≥ t_i ) }/ p_0 G(t_i) | ≥ϵ /2 ) + ∑_i = 0^l_nℙ( | ∑_j ∈ℋ_0 [ 1 (Ŵ_j ≥ t_i) - 1 ( W_i ≥ t_i ) ] / p_0 G(t_i) | ≥ϵ /2, ℬ_3) + o(1) ≤∑_i = 0^l_n 4 [ {∑_j ∈ℋ_0 [ 1 (W_j ≥ t_i) - ℙ ( W_i ≥ t_i ) ] }^2 ] /ϵ^2 p_0^2 G^2 (t_i) + ∑_i = 0^l_n 2 ∑_j ∈ℋ_0ℙ( t_i - b_n ≤W_j ≤ t_i + b_n ) /ϵ p_0 G(t_i) + o(1), where the last step above is due to the Markov inequality and the fact that |1 (Ŵ_j ≥ t_i) - 1 ( W_i ≥ t_i )|≤1 (t_i-b_n≤W_j ≤ t_i+b_n) on event ℬ_3. We next bound the first two terms on the very right-hand side of (<ref>) above. 
For the first term, under Condition <ref> for the weak dependence between {W_j}, we have that ∑_i = 0^l_n 4 [ {∑_j ∈ℋ_0 [ 1 (W_j ≥ t_i) - ℙ ( W_i ≥ t_i ) ] }^2 ] /ϵ^2 p_0^2 G^2 (t_i) ≤ C ∑_i = 0^l_n m_n p_0 G(t_i) + o( (log p)^-1/γ [p_0 G(t_i)]^2 ) /ϵ^2 p_0^2 G^2 (t_i) = C ϵ^-2 m_n ∑_i = 0^l_n1/p_0 G(t_i) + C ϵ^-2 o (l_n (log p)^- 1/γ). Moreover, it holds that ∑_i = 0^l_n 1 / p_0 G (t_i) = p_0^-1∑_i = 0^l_n 1 / z_i = p/p_0∑_i = 0^l_n1/ c_1 q a_n + h_n e^i ^γ ≤ C h_n^-1, where the last inequality above is related to the proof of Theorem 3 in <cit.>. In light of the definition of h_n and the assumption of m_n / a_n → 0, we have that m_n / h_n = (m_n / a_n)^1 - η→ 0. Therefore, combining (<ref>)–(<ref>) and the fact that l_n = [log ((p - c_1 q a_n)/h_n)]^1/γ≤ (log p)^1/γ shows that the first term for the bound in (<ref>) tends to zero as n →∞. Moreover, since l_n ≤ (log p)^1/γ, the second term on the very right-hand side of (<ref>) above is bounded by 2/ϵ (log p)^1/γsup_t ∈ (0, G^-1 (c_1 q a_n/p) ] G(t - b_n ) - G(t + b_n) / G(t) , which converges to zero as n →∞ under Condition <ref>. Finally, we can obtain that for each ϵ > 0, ℙ ( D_n > ϵ ) → 0, which establishes the desired result in (<ref>). This concludes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> We will show that with asymptotic probability one, it holds that for some 0 < c_1 < 1, 1 + ∑_j = 1^p 1( Ŵ_j < - G^-1 ( c_1 q a_n/ p ) ) ≤ q a_n ≤ q ∑_j = 1^p 1( Ŵ_j ≥ G^-1 ( c_1 q a_n/ p ) ). Then from the definition of T, we can obtain the desired result of the lemma. We aim to establish (<ref>). The main idea of the proof is to prove that the population counterpart of (<ref>) holds. Then with an application of Lemma <ref> to both left- and right-hand sides of (<ref>), we can connect it to the population counterpart and thus prove that (<ref>) holds with asymptotic probability one. First, it follows from the union bound and the fact that ℙ(A) ≤ℙ(A∩ B) + ℙ(B^c) for any two events A and B that under Conditions <ref>–<ref>, ℙ ( Ŵ_j < 3 δ_n   j ∈𝒜_n) ≤ℙ ( Ŵ_j < 3 δ_n   j ∈𝒜_n, max_1 ≤ j ≤ p |Ŵ_j - W_j | < b_n) + ℙ (max_1 ≤ j ≤ p |Ŵ_j - W_j | ≥ b_n ) ≤ℙ ( W_j < 3 δ_n + b_n    j ∈𝒜_n) + ℙ (max_1 ≤ j ≤ p |Ŵ_j - W_j | ≥ b_n ) ≤∑_j ∈𝒜_n ℙ ( W_j - w_j < 3 δ_n + b_n - w_j ) + o(1) ≤∑_j ∈𝒜_n ℙ ( | W_j - w_j | > δ_n )+ o(1) ≤∑_j = 1 ^p ℙ ( | W_j - w_j | > δ_n )+ o(1) → 0 . Then we have ℙ (∩_j ∈𝒜_n{Ŵ_j ≥ 3 δ_n}) → 1 and thus with asymptotic probability one, ∑_j = 1^p 1 ( Ŵ_j ≥ 3 δ_ n ) ≥ a_n , where a_n = |𝒜_n|. In addition, since w_j > - δ_n for 1 ≤ j ≤ p by assumption, we can deduce that ∑_j = 1^p ℙ ( Ŵ_j < - 3 δ_n) ≤∑_j = 1^p ℙ ( Ŵ_j < - 3 δ_n, max_1 ≤ j ≤ p |Ŵ_j - W_j | < b_n ) + ℙ (max_1 ≤ j ≤ p |Ŵ_j - W_j | ≥ b_n ) ≤∑_j = 1^p ℙ ( W_j < - 3 δ_n + b_n ) + o(1) ≤∑_j = 1^p ℙ (W_j - w_j ≤ - 3 δ_n + b_n - w_j ) + o(1) ≤∑_j = 1^p ℙ ( | W_j - w_j | > δ_n ) + o(1) → 0 , which yields ∑_j = 1^p ℙ ( Ŵ_j < - 3 δ_n)→ 0. Using similar arguments as for (<ref>), it holds that ∑_j = 1^p ℙ (W_j ≤ - 3 δ_n ) → 0. Then we can obtain that G( 3 δ_n ) = p_0^-1∑_j ∈ℋ_0ℙ ( W_j ≤ -3 δ_n ) ≤ p_0^-1∑_j = 1^p ℙ ( W_j ≤ -3 δ_n ) = o(p_0^-1) . Since a_n →∞, p_0 / p → 1, and G(t) is a nonincreasing, continuous function, it follows that G(3 δ_n) ≤ c_1 q a_n / p and thus G^-1 ( c_1 q a_n/ p ) ≤ 3 δ_n for some constant 0 < c_1 < 1 when n is sufficiently large. This together with (<ref>) entails that with asymptotic probability one, ∑_j = 1^p 1 ( Ŵ_j ≥ G^-1 ( c_1 q a_n/ p ) ) ≥ a_n. This completes the proof of the second inequality in (<ref>). 
It remains to establish the first inequality in (<ref>). From the definition of G(t) and Lemma <ref>, it holds that c_1 q a_n / p = p_0^-1∑_j ∈ℋ_0ℙ ( W_j ≤ - G^-1 ( c_1 q a_n/ p ) ) = (1 + o_p(1)) · p_0^-1∑_j ∈ℋ_01( Ŵ_j < - G^-1 ( c_1 q a_n/ p ) ). Then for some constant c_2 satisfying 0 < c_1 < c_2 < 1, we can obtain that with asymptotic probability one, 1 + ∑_j ∈ℋ_01( Ŵ_j < - G^-1 ( c_1 q a_n/ p ) ) ≤c_1 q a_n p_0/p (1 + o_p(1)) ≤ c_2 q a_n , where we have used the assumption that p_0/p → 1. Further, under (<ref>) in Condition <ref>, an application of the union bound yields that ℙ( ∑_j ∈ℋ_11( Ŵ_j < - G^-1 (c_1 q a_n/p) ) ≥ (1 - c_2) q a_n ) ≤ℙ( ∑_j ∈ℋ_11( W_j < - G^-1 (c_1 q a_n/p ) + b_n ) ≥ (1 - c_2) q a_n, max_1 ≤ j ≤ p |Ŵ_j - W_j | < b_n ) + o(1) ≤1/ (1 - c_2 ) q a_n∑_j ∈ℋ_1ℙ( W_j < - G^-1 (c_1 q a_n/p) + b_n ) + o(1) → 0, which together with (<ref>) implies that 1 + ∑_j = 1^p 1( Ŵ_j < - G^-1 ( c_1 q a_n/ p ) ) ≤ q a_n with asymptotic probability one. This proves the first inequality in (<ref>), which completes the proof of Lemma <ref>.

We further sketch a complementary lower bound for the threshold T, which refines the characterization above. We have proved that with asymptotic probability one, T∈ (0, G^-1 ( c_1 q a_n/ p )); denote this event by ℬ_1. By definition, we have T∈𝒮 with asymptotic probability one, where 𝒮 := {t∈ (0, G^-1 ( c_1 q a_n/ p )) : ∑_j = 1^p 1( Ŵ_j ≤ -t )/1⋁∑_j = 1^p 1( Ŵ_j ≥ t )≤ q}. Note that for any t∈𝒮, we have ∑_j ∈ℋ_0 1 ( Ŵ_j ≤ -t ) + ∑_j∈ℋ_1 1 ( Ŵ_j ≤ -t ) ≤ q + q ∑_j ∈ℋ_0 1 ( Ŵ_j ≥ t ) + q∑_j∈ℋ_1 1 ( Ŵ_j ≥ t ). Denote by ℬ_2, ϵ the event on which the inequalities in Lemma <ref> hold. Then on ℬ_2, ϵ, (1-ϵ)∑_j ∈ℋ_0ℙ( W_j ≤ -t ) ≤ q + qs + (1+ϵ)q∑_j ∈ℋ_0ℙ( W_j ≥ t ), where s = |ℋ_1| denotes the number of signals. That is, ∑_j ∈ℋ_0ℙ( W_j ≤ -t ) ≤ q(1+s)/(1-q-ϵ-qϵ), which yields t ≥ G^-1( q(1+s)/[(1-q-ϵ-qϵ) p_0] ). Hence, on the event ℬ_2, ϵ∩ℬ_1, it holds that G^-1( q(1+s)/[(1-q-ϵ-qϵ) p_0] ) ≤ T≤ G^-1 ( c_1 q a_n/ p ).

§.§ Proof of Lemma <ref>

The proof of this lemma relies on the definitions of T_v and T_v, with the intuition that T_v resembles the vth order statistic of - W_j, while T_v resembles the vth order statistic of Ŵ_j. Intuitively, this means that if the distance between W_j and Ŵ_j is bounded by b_n, the distance between the corresponding order statistics should also be bounded by b_n. We will formalize this argument next. Let us define an event 𝒞 := {max_1 ≤ j ≤ p | Ŵ_j - W_j| ≤ b_n} . Condition <ref> assumes that ℙ (𝒞) → 1. Denote by Ŝ_v = { 1 ≤ j ≤ p : - Ŵ_j ≥T_v } and S_v = { 1 ≤ j ≤ p: - W_j ≥T_v } . Observe that | Ŝ_v | = v and | S_v | = v by the definitions of T_v and T_v. If j_0 ∈Ŝ_v, on event 𝒞 we have that - W_j_0 = - Ŵ_j_0 + ( Ŵ_j_0 - W_j_0 ) ≥T_v - b_n, which entails that ∑_j = 1^p 1 ( - W_j ≥T_v - b_n ) ≥ v. Moreover, since T_v satisfies ∑_j = 1^p 1( - W_j ≥T_v ) = v, it follows that T_v ≥T_v - b_n by the monotonicity of the indicator function. Similarly, we can also show that T_v ≥T_v - b_n on event 𝒞. Thus, (<ref>) is derived. This concludes the proof of Lemma <ref>.

§.§ Proof of Lemma <ref>

Note that k is the number of failures before v successes in a binomial process with success probability 1/2. The major intuition behind the desired result (<ref>) is that, by the law of large numbers, the numbers of failures and successes should become asymptotically comparable as the number of trials tends to infinity. Let D_k + v - 1 be a binomial random variable with distribution B ( k + v - 1, 1/2 ) and L_v the negative binomial random variable with distribution NB(v, 1/2 ). Observe that (<ref>) is equivalent to ℙ ( L_v ≥ k ) ≤ q.
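As an aside, the integer v in this characterization can be computed numerically; the following minimal sketch assumes SciPy's negative binomial parameterization (number of failures before the v-th success, here with success probability 1/2) and is purely illustrative.

from scipy.stats import nbinom

def largest_v(k, q):
    # Largest integer v with P(L_v >= k) <= q, where L_v ~ NB(v, 1/2).
    v = 0
    # nbinom.sf(k - 1, v + 1, 0.5) equals P(L_{v + 1} >= k).
    while nbinom.sf(k - 1, v + 1, 0.5) <= q:
        v += 1
    return v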
According to the relationship between the negative binomial distribution and binomial distribution, we have that ℙ ( L_v ≥ k ) = 1 - ℙ ( L_v ≤ k - 1 ) = 1 - ℙ ( D_k + v - 1 ≥ v ) = ℙ ( D_k + v - 1 ≤ v - 1 ). By the central limit theorem, it holds that when k + v →∞, ℙ ( D_k + v - 1 ≤ v - 1 ) = Φ( v - 1 - k /√( k + v - 1 )) + o(1). Therefore, (<ref>) implies that v - 1 - k /√( k + v - 1 )≤Φ ^-1 (q - o(1) ). In addition, since v is the largest integer such that (<ref>) holds, we have that ℙ ( L_v +1≥ k) > q . Using similar arguments as for (<ref>), it follows that as k + v →∞, ℙ ( L_v + 1 ≥ k ) = ℙ ( D_k + v≤ v ) = Φ( v - k /√( k + v )) + o(1) and hence v - k /√(k + v )≥Φ^-1 ( q - o(1) ), which along with (<ref>) leads to (<ref>). This completes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> The proof of this lemma consists of two steps. We will first establish the tight bounds below for T_v. In the second step, noting that T_v + M_v + 1 < T_v - 2 b_n ≤T_v + M_v by the definition of M_v in (<ref>), we will show that M_v is bounded as long as b_n is sufficiently small. For 0< < 1/8, under Conditions <ref>, <ref>, and <ref> we have that ℙ( G^-1( v(1+)/p_0) < T_v < G^-1(v(1 - )/p_0) ) → 1. Under Condition <ref>, we have that 2 b_n < G^-1( v(1 + )/p_0) - G^-1( v (1 + 3 ) (1 - ) /p_0). Using similar arguments as in the proof of Lemma <ref> below, we can show that under Conditions <ref>, <ref>, and <ref>, ℙ( G^-1( v (1 + 3 )(1 + )/p_0) < T_v (1 + 3 ) < G^-1((v (1 + 3 ))(1 - )/p_0) ) → 1. Then it follows that T_v (1 + 3 ) < G^-1 ( v (1 + 3 ) (1 - ) /p_0) < G^-1 ( v (1 + )/p_0 ) < T_v . Additionally, applying Lemmas <ref> and <ref> together with the definition of T_v gives that with asymptotic probability one, T_v + M_v ≥T_v - 2 b_n ≥ G^-1 ( v(1 + )/p_0 ) - [ G^-1 ( v(1 + )/p_0 ) - G^-1 ( v (1 + 3 ) (1 - )/p_0 ) ] = G^-1 ( v (1 + 3 ) (1 - )/p_0 ) > T_v (1 + 3 ). Therefore, we can obtain that ℙ (M_v < 3 v ) → 1 since T_v is decreasing with respect to v. This will conclude the proof of Lemma <ref>. We will present the formal proofs of Lemmas <ref> and <ref> below. Proof of Lemma <ref>. The main idea of the proof is to establish the convergence of the empirical distribution of {W_j} that ∑_j ∈ℋ_01(W_j ≥ t) is close to ∑_j ∈ℋ_0ℙ(W_j ≥ t). Using similar arguments as in the proof of Lemma <ref> in Section <ref>, we can obtain that when m_n/k → 0 (which combined with Lemma <ref> implies that m_n / v → 0), sup_t ∈ (G^-1(3k/2p), G^-1(k/2p) ) | ∑_j ∈ℋ_01(W_j ≤ - t) /∑_j ∈ℋ_0ℙ(W_j ≤ - t) - 1 | = o_p(1). Since ∑_j ∈ℋ_0ℙ(W_j ≤ - G^-1(v (1 + ) /p_0) = v (1 + ), we see from (<ref>) that ∑_j=1^p 1(-W_j ≥ G^-1(v (1 + ) /p_0)) ≥∑_j ∈ℋ_01(W_j ≤ - G^-1(v (1 + ) /p_0)) = v (1 + ) (1 + o_p(1)) > v holds with asymptotic probability one. Hence, from the definition of T_v, we have that ℙ(T_v > G^-1 ( v (1 + )/p_0 ) ) → 1. We next prove the upper bound for T_v. Note that ∑_j =1^p 1 (W_j ≤ - T_v) = v. We will aim to show that with asymptotic probability one, ∑_j ∈ℋ_11 ( W_j ≤ - T_v ) < v / 2. Then with asymptotic probability one, it holds that ∑_j ∈ℋ_01 (W_j ≤ - T_v) ≥ v (1 - /2). On the other hand, applying (<ref>) and similar argument as for (<ref>), we can obtain that with asymptotic probability one, ∑_j ∈ℋ_01(W_j ≤ - G^-1(v (1 - ϵ_n) /p_0) < v (1 - /2) . Combining the above two results shows that with asymptotic probability one, T_v≤ G^-1(v (1 - ) /p_0), which completes the proof for the upper bound. It remains to establish (<ref>). Since p_0/ p → 1 and v/k → 1 (cf. 
Lemma <ref>), we have that G^-1 (3 k /2 p) < G^-1 ( v (1 + )/p_0 ) when n and p are sufficiently large and 0 < < 1/8. Then from (<ref>), it holds that G^-1 (3 k /2 p) ≤T_v and hence with asymptotic probability one, ∑_j ∈ℋ_11 (W_j ≤ - T_v) ≤∑_j ∈ℋ_11 (W_j < - G^-1 (3 k /2 p)) . Moreover, an application of the Markov inequality, Lemma <ref>, and (<ref>) in Condition <ref> yields that as n →∞, ℙ(∑_j ∈ℋ_11 (W_j < - G^-1 (3 k /2 p)) > v /2 ) ≤2/ v ∑_j ∈ℋ_1ℙ(W_j < - G^-1 (3 k /2 p) ) → 0. Therefore, (<ref>) is derived in view of (<ref>). This completes the proof of Lemma <ref>. Proof of Lemma <ref>. Let us observe that v (1 + 3 ) (1 - )/p_0 - v(1 + )/p_0 = v/p_0 ( - 3 ^2). By the assumptions that p_0/p→ 1 and m_n/k→ 0, and applying Lemma <ref> and the observation above, it follows that when k and p are sufficiently large, v (1 + 3 ) (1 - )/p_0 - v(1 + )/p_0≥ k / 2 p . Note that assumption (<ref>) in Condition <ref> entails that sup_t ∈ ( G^-1 (3k/2p), G^-1(k/2p)) [ G(t - b_n ) - G(t + b_n) ] = o( k /p ). Combining the above two results and Lemma <ref>, we can obtain that v (1 + 3 ) (1 - )/p_0 - v(1 + )/p_0≫sup_t ∈ ( G^-1 (3k/2p), G^-1(k/2p)) [ G(t - b_n ) - G(t + b_n) ]. Notice that G^-1(v (1 + 3 ) (1 - )/p_0 ) ∈ ( G^-1 (3k/2p), G^-1(k/2p)) and G^-1 (v(1 + )/p_0 ) ∈ ( G^-1 (3k/2p), G^-1(k/2p)) when k and p are sufficiently large. Therefore, using proof by contradiction and the monotonicity of function G(·), we can establish the desired result of Lemma <ref>. This concludes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> Recall that the perfect and approximate knockoff statistics based on the marginal correlation are defined as W_j = (√(n)_2)^-1 ( | _j^⊤ | - |_j^⊤| ) and Ŵ_j = (√(n)_2)^-1 ( | _j^⊤ | - |_j^⊤| ), respectively. By the triangle inequality, it is easy to see that max_1 ≤ j ≤ p | Ŵ_ j - W_ j | ≤max_1 ≤ j ≤ p (√(n)_2)^-1 | (_j - _j)^⊤ |. Then an application of the Cauchy–Schwarz inequality gives that max_1 ≤ j ≤ p | Ŵ_ j - W_ j | ≤ (√(n))^-1max_1 ≤ j ≤ p _j - _j _2 . Thus, the conclusion of Lemma <ref> can be derived under Condition <ref>. This completes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> From the definitions of W_j and w_j and the triangle inequality, it holds that ℙ (| W_j - w_j | ≥δ_n ) ≤ℙ( ( n^-1^2_2 )^-1/2| n^-1 ( | _j^⊤ | - | _j^⊤ | ) - ( | (X_j Y)| - |(X_j Y)| ) | ≥δ_n / 2 ) + ℙ( | ( n^-1^2_2 )^-1/2 - ( Y^2)^-1/2| ·| | (X_j Y)| - |(X_j Y)| | ≥δ_n / 2 ) := P_1 + P_2. We will aim to show that for δ_n → 0, P_1 ≤ 4 exp{ - n δ_n^2 Y^2 / 256 X_j_ψ_2^2 Y _ψ_2^2 } + exp{ - n ( Y^2)^2 /8 Y^4 } and P_2 ≤ 2 exp{ - n δ_n^2 ( Y^2 )^2 / 64 | w_j |^2 Y _ψ_2^4 } + exp{ - n ( Y^2 )^2 /8 Y^4}. Then setting δ_n = √(log p/n)max_1 ≤ j ≤ p{ 16 √(2) X_j _ψ_2 Y _ψ_2/ ( Y^2)^1/2 8√(2) |w_j| Y _ψ_2^2 / Y^2 }, a combination of the above results leads to the desired conclusion of this lemma. We proceed with proving (<ref>). Since _2^2 = ∑_i = 1^n y_i^2 is the sum of i.i.d. random variables, an application of Bernstein’s inequality yields that ℙ ( n^-1_2^2 ≤[Y^2] /2 ) ≤exp{ - n ( Y^2)^2 /8 Y^4 }. It follows from the triangle inequality and (<ref>) that P_1 ≤ℙ( | n^-1 ( | _j^⊤ | - | _j^⊤ | ) - ( | (X_j Y)| - |(X_j Y)| ) | ≥δ_n ( Y^2)^1/2/2 √(2)) + ℙ ( n^1/2(_2)^-1≥√(2) ([Y^2])^-1/2 ) ≤ℙ( 1/n| ∑_i = 1^n [_i, j y_i - (X_j Y)] | ≥δ_n ( Y^2)^1/2/4 √(2)) + ℙ( 1/n| ∑_i = 1^n [_i, j y_i - (X_j Y)] | ≥δ_n ( Y^2)^1/2/4 √(2)) + exp{ - n ( Y^2)^2 /8 Y^4 }. We next bound the first two terms on the right-hand side of the expression above. 
Under Condition <ref>, we see that _i, j y_i and _i, j y_i are both sub-exponential random variables, with sub-exponential norms X_j _ψ_2 Y _ψ_2 and X_j _ψ_2 Y _ψ_2, respectively. Then we can obtain through applying Bernstein's inequality for sub-exponential random variables (see, e.g., Corollary 2.8.3 in <cit.>) that when δ_n = o(1), ℙ( 1/n| ∑_i = 1^n [_i, j y_i - (X_j Y)] | ≥δ_n ( Y^2)^1/2/4 √(2)) ≤ 2 exp{ - n δ_n^2 Y^2 / 256 X_j_ψ_2^2 Y _ψ_2^2 } and ℙ( 1/n| ∑_i = 1^n [_i, j y_i - (X_j Y)] | ≥δ_n ( Y^2)^1/2/4 √(2)) ≤ 2 exp{ - n δ_n^2 Y^2 / 256 X_j_ψ_2^2 Y _ψ_2^2 }. Thus, combining the above three inequalities establishes (<ref>). As for term P_2, noting that w_j = ( Y^2)^-1/2 ( | (X_j Y)| - |(X_j Y)| ) and | ( n^-1^2_2 )^-1/2 - ( Y^2)^-1/2| = | n^-1_2^2 - Y^2 | / n^-1/2_2 ( Y^2)^1/2 (( Y^2)^1/2 + n^-1/2_2 ) , we can deduce that P_2 = ℙ( |w_j| | n^-1_2^2 - Y^2 | / n^-1/2_2 (( Y^2)^1/2 + n^-1/2_2 ) ≥δ_n / 2 ) ≤ℙ( |w_j| | n^-1_2^2 - Y^2 |/ n^-1/2_2 ( Y^2)^1/2≥δ_n / 2 ) = ℙ( | n^-1_2^2 - Y^2 | ≥δ_n Y^2 /2 √(2) |w_j| ) + ℙ ( n^-1_2^2 ≤ Y^2 /2 ) . The very last term above can be bounded by applying (<ref>). Again we can see that under Condition <ref>, y_i^2 is a sub-exponential random variable with sub-exponential norm Y _ψ_2^2. With the aid of Bernstein's inequality for sub-exponential random variables (Corollary 2.8.3 in <cit.>), we can obtain that for δ_n = o(1), ℙ( 1/n| ∑_i = 1^n [ y_i^2 - ( Y^2 )] | ≥δ_n Y^2 /2 √(2) |w_j| ) ≤ 2 exp{ - n δ_n^2 ( Y^2 )^2 / 64 | w_j |^2 Y _ψ_2^4 }. Therefore, the bound for term P_2 in (<ref>) can be shown. This concludes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> The main idea of the proof is to apply the law of total variance and decompose the total into two terms by conditioning on (_ℋ_1, ), where _ℋ_1= (_j)_j ∈ℋ_1 and = (ε_1, …, ε_n)^⊤. Specifically, it holds that ( ∑_j ∈ℋ_01 (W_j ≥ t) ) = {[ ( ∑_j ∈ℋ_01 (W_j ≥ t) - ∑_j ∈ℋ_0ℙ( W_j ≥ t | _ℋ_1, ) )^2 | _ℋ_1, ] } + {(∑_j ∈ℋ_0ℙ( W_j ≥ t | _ℋ_1, ) - ∑_j ∈ℋ_0ℙ( W_j ≥ t) )^2 } := V_1 + V_2. We will bound terms V_1 and V_2 above separately. Let us begin with the first term V_1. We can expand the square and obtain that V_1 = ∑_j ∈ℋ_0∑_ℓ∈ℋ_0{[ ( 1 (W_j ≥ t) - ℙ( W_j ≥ t | _ℋ_1, ) ) ×( 1 (W_ℓ≥ t) - ℙ( W_ℓ≥ t | _ℋ_1, ) ) | _ℋ_1, ] }. Observe that conditional on (_ℋ_1, ), it follows from model (<ref>) that is deterministic. In addition, W_j depends only on _j and _j besides . Thus, we need only to consider the conditional distribution of (_j, _j, _k, _k) | (_ℋ_1, ). We will aim to show that each W_j depends on at most m_n random variables in {W_k: k∈ℋ_0}. Indeed, it suffices to show that conditional on (_ℋ_1, ), the number of (_k, _k)'s that are dependent on (_j, _j) is at most m_n. Since the rows of (, ) are i.i.d. and are independent of , we need only to consider the distribution of a single row; that is, (X_j, X_j, X_k, X_k) | (X_ℋ_1, ε) d= (X_j, X_j, X_k, X_k) | X_ℋ_1. In view of the multinormal distribution in (<ref>), it follows that the conditional distribution (X_j, X_j, X_k, X_k) | X_ℋ_1 is still normal. We can obtain from the conditional distribution that {( [ X_j; X_j; ], [ X_k; X_k; ]) | X_ℋ_1} = [ _j, k - _j, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, k _j, k - _j, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, k; _j, k - _j, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, k _j, k - _j, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, k; ]. In particular, (X_j, X_j) and (X_k, X_k) are independent conditional on X_ℋ_1 if and only if _j, k - _j, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, k=0. 
Thus, to count the number of dependent pairs of (X_j, X_j) and (X_k, X_k) for j, k ∈ℋ_0, we need only to count the number of nonzero (_j, k - _j, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, k)'s. Without loss of generality, let us assume that X = (X_ℋ_1, X_ℋ_0) and = [ _ℋ_1, ℋ_1 _ℋ_1, ℋ_0; _ℋ_0, ℋ_1 _ℋ_0, ℋ_0; ]. Using the formula for block matrix inverse, it holds that ^-1 = [ (^-1)_11 (^-1)_12; (^-1)_21 _ℋ_0, ℋ_0 - _ℋ_0, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, ℋ_0; ], where (^-1)_11 = _ℋ_1, ℋ_1^-1 + _ℋ_1, ℋ_1^-1_ℋ_1, ℋ_0 (_ℋ_0, ℋ_0 - _ℋ_0, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, ℋ_0)^-1_ℋ_0, ℋ_1_ℋ_1, ℋ_1^-1 , (^-1)_12 =- _ℋ_1, ℋ_1^-1_ℋ_1, ℋ_0 (_ℋ_0, ℋ_0 - _ℋ_0, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, ℋ_0)^-1 , and (^-1)_21 = (^-1)_12^⊤. In addition, Condition <ref> assumes that max_1 ≤ j ≤ p (^-1)_j_0 ≤ m_n, which indicates that max_j ∈ℋ_0 ( _ℋ_0, ℋ_0 - _ℋ_0, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, ℋ_0 )_j _0≤ m_n since it is a submatrix of ^-1. Hence, we can obtain that for a given j ∈ℋ_0, ∑_k ∈ℋ_01( _j, k - _j, ℋ_1_ℋ_1, ℋ_1^-1_ℋ_1, k = 0 ) ≤ m_n. Consequently, we see that conditional on (_ℋ_1, ), the number of k ∈ℋ_0 such that (_k, _k) is dependent on (_j, _j) is at most m_n. For j ∈ℋ_0, let us define N(j) := { k∈ℋ_0: W_k W_j | (_ℋ_1 , ) }. Then it holds that | N(j) | ≤ m_n. From (<ref>) and the fact that the indicator function takes values between 0 and 1, we can deduce that V_1 = ∑_j ∈ℋ_0∑_ℓ∈ N(j){[ 1 (W_j ≥ t) ·1 (W_ℓ≥ t) | _ℋ_1, ] } - ∑_j ∈ℋ_0∑_ℓ∈ N(j){[ ℙ( W_j ≥ t | _ℋ_1, ) ℙ( W_ℓ≥ t | _ℋ_1, ) ] } ≤∑_j ∈ℋ_0∑_ℓ∈ N(j){[ 1 (W_j ≥ t) ·1 (W_ℓ≥ t) | _ℋ_1, ] } ≤ m_n ∑_j ∈ℋ_0{[ 1 (W_j ≥ t) | _ℋ_1, ] } = m_n ∑_j ∈ℋ_0ℙ (W_j ≥ t ) = m_n p_0 G(t). We next proceed with showing the bound for term V_2. We can expand V_2 as V_2 = ∑_j ∈ℋ_0∑_ℓ∈ℋ_0{( ℙ ( W_j ≥ t | _ℋ_1, ) - ℙ ( W_j ≥ t ) ) ×( ℙ ( W_ℓ≥ t | _ℋ_1, ) - ℙ ( W_ℓ≥ t ) ) }. The key idea of the proof is to examine the conditional distribution ℙ (W_j ≥ t | _ℋ_1, ) and show that given j ∈ℋ_0, the number of dependent ℙ ( W_ℓ≥ t | _ℋ_1, ) is at most m_n. Since (X, X ) is multinormal, it holds that (X_j, X_j) | ( X_ℋ_1, )  d∼ N ( [ _j, ℋ_1_ℋ_1, ℋ_1 ^-1 X_ℋ_1; _j, ℋ_1_ℋ_1, ℋ_1 ^-1 X_ℋ_1; ], _cond), where _cond = [ _j, j - _j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1, j _j, j - r - _j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1, j; _j, j - r - _j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1, j _j, j - _j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1, j; ]. Since the rows of the augmented data matrix (, ) are i.i.d. and is deterministic given ( _ℋ_1, ), we can obtain that ( _j^⊤/√(n)_2, _j^⊤/√(n)_2) | ( _ℋ_1, ) d∼ N ((√(n)_2)^-1[ _j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1^⊤; _j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1 ^⊤; ], n^-1_cond). Note that when _ℋ_1, j = 0, the conditional distribution above does not depend on (_ℋ_1, ) and hence any term involving such j ∈ℋ_0 in the expansion of V_2 will disappear. Denote by N_dep = {j ∈ℋ_0: _ℋ_1, j≠ 0}. It follows from Condition <ref> that | N_dep | ≤ m_n. Then we have that V_2 = ∑_j ∈ℋ_0∑_ℓ∈ N_dep{( ℙ ( W_j ≥ t | _ℋ_1, ) - ℙ ( W_j ≥ t ) ) ×( ℙ ( W_ℓ≥ t | _ℋ_1, ) - ℙ ( W_ℓ≥ t ) ) } ≤∑_j ∈ℋ_0∑_ℓ∈ N_dep{ℙ ( W_j ≥ t | _ℋ_1, ) ℙ ( W_ℓ≥ t | _ℋ_1, ) } ≤∑_j ∈ℋ_0∑_ℓ∈ N_dep{ℙ ( W_j ≥ t | _ℋ_1, ) }≤ m_n p_0 G(t). Therefore, substituting (<ref>) and (<ref>) into (<ref>) yields (<ref>). This completes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> Proof of (<ref>). In the proof of Lemma <ref> in Section <ref> (cf. (<ref>)), we have shown that ( _j^⊤/_2, _j^⊤/_2) | ( _ℋ_1, )  d∼ N ( [ μ_j; μ_j; ], σ_j^2 [ 1 ρ_j; ρ_j 1; ]), where μ_j = _2^-1_j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1^⊤, σ_j^2 = _j, j - _j, ℋ_1_ℋ_1, ℋ_1 ^-1_ℋ_1, j, ρ_j = 1 - r / σ_j^2, and r is as given in (<ref>). 
Recall the definition N_dep = {j ∈ℋ_0: _ℋ_1, j≠ 0} in the proof of Lemma <ref>. It holds that |N_dep| ≤ m_n in view of Condition <ref>. Furthermore, note that G(t) ≥ c_1 q a_n / p for t ∈ (0, G^-1 ( c_1 q a_n / p ) ]. Let us define R_n := sup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] ∑_j ∈ℋ_0 ∩ N_dep^cℙ (t - Δ_n ≤W_j < t + Δ_n) /∑_j ∈ℋ_0 ∩ N_dep^cℙ (W_j ≥ t ) . Then we can write sup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] G(t - Δ_n) - G(t + Δ_n) / G(t) = sup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] ∑_j ∈ℋ_0 ∩ N_depℙ (t - Δ_n ≤W_j < t + Δ_n) / p_0 G(t) + R_n ≤m_n p / c_1 q a_n p_0 + R_n. From the assumptions that (log p)^1/γ m_n / a_n → 0 and p_0 / p → 1, we have that (log p )^1/γm_n p / c_1 q a_n p_0 → 0. It remains to establish (log p)^1/γ R_n → 0. A key observation is that when j ∈ℋ_0 ∩ N_dep^c, it follows that the conditional distribution ( _j^⊤/_2, _j^⊤/_2) | ( _ℋ_1, )  d∼ N ( [ 0; 0; ], [ _j, j^2 _j, j^2 - r; _j, j^2 - r _j, j^2; ]), which does not depend on ( _ℋ_1, ). Then we see that the distribution of W_j does not depend on ( _ℋ_1, ) and satisfies that ℙ ( √(n)W_j ≥ t ) = ℙ ( |Z_1| - |Z_2| ≥ t ), where (Z_1, Z_2)^⊤ is a two-dimensional multinormal random variable with mean (0, 0)^⊤ and covariance matrix [ _j, j^2 _j, j^2 - r; _j, j^2 - r _j, j^2; ] . For j ∈ℋ_0 ∩ N_dep^c and t > 0, the density function of √(n)W_j is given by f_√(n)W_j(t) = √(2)/√(π) c_2, j[ 1 - Φ( t /c_1, j) ] exp{- t^2 /2 c_2, j^2 } + √(2)/√(π) c_1, j[ 1 - Φ( t /c_2, j) ] exp{- t ^2 /2 c_1, j^2 } , where c_1, j = √(4 _j,j^2 - 2 r ) and c_2, j = √(2r). Based on the density function of √(n)W_t above and the basic inequality that 1 - Φ(x) ≤ e^-x^2/2 for x ≥ 0, it is easy to see that ℙ ( W_j ≥ t ) = ℙ (√(n)W_j ≥√(n) t ) ≤∫_√(n) t^∞√(2)/√(π) c_2, jexp{- x^2 /2 c_2, j^2 } dx +∫_√(n) t^∞√(2)/√(π) c_1, jΦ( -x /c_2, j) dx ≤(2 + 2 c_2, j/ c_1, j) [1 - Φ(√(n) t/c_2, j) ]. Then we can obtain that G(t ) ≤max_j ∈ℋ_0(2 + 2 c_2, j/ c_1, j) [1 - Φ(√(n) t/ c_2, j) ] . Setting t = G^-1 ( c_1 q a_n/ p ) in the inequality above yields that G^-1 ( c_1 q a_n/ p ) = O( √(log p/n) ) when C_1 < r < _j, j^2 < C_2 with some absolute constants C_1> 0 and C_2> 0 for each j ∈ℋ_0. We will bound the ratio in R_n by considering two ranges of t∈ (0, 4n^-1/2max_j ∈ℋ_0 c_1, j c_2, j) and t∈ [4n^-1/2max_j ∈ℋ_0 c_1, j c_2, j, G^-1(c_1qa_n/p)] separately. When t falls into the first range, in view of (<ref>) the denominator G(t) in the ratio in R_n is of a constant order, while the numerator is uniformly bounded from above by O(√(n)Δ_n) over all t in this range because the density f_√(n)W_j(t) is bounded from above by a constant. We now consider the ratio in R_n in the second range of t∈ [4n^-1/2max_j ∈ℋ_0 c_1, j c_2, j, G^-1(c_1qa_n/p)]. We will bound the numerator and denominator in (<ref>) separately in this range. It follows from (<ref>) and the mean value theorem that there exists some ξ∈ (√(n) t - √(n)Δ_n, √(n) t + √(n)Δ_n) such that ℙ (√(n) t - √(n)Δ_n ≤√(n)W_j ≤√(n) t + √(n)Δ_n) = 2 √(n)Δ_n {√(2)/√(π) c_2, j[ 1 - Φ( ξ/c_1, j) ] exp{ - ξ^2 / 2 c_2, j^2 } + √(2)/√(π) c_1, jexp{- ξ^2 /2 c_1, j^2}[1 - Φ( ξ/c_2, j) ] }. Moreover, since √(n) t ≤√(n) G^-1 (c_1 q a_n/p) = O( √(log p) ) and Δ_n √(n log p)→ 0, we can obtain through some direct calculations that | 1 - Φ( ξ/c_1, j) / 1 - Φ( √(n) t /c_1, j) - 1 | ≤ C √(n) t ·√(n)Δ_n = O ( Δ_n √(n log p)). Similarly, it holds that | exp{- ξ^2 /2 c_1, j^2}/exp{- (√(n) t)^2 /2 c_1, j^2} - 1 | ≤ C √(n) t ·√(n)Δ_n = O ( Δ_n √(n log p)). 
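The closed-form density of √(n) W_j displayed above, together with the tail formula 2[1 - Φ(t/c_{1,j})][1 - Φ(t/c_{2,j})] used in the sequel, admits a quick simulation check. The Python sketch below is our own illustration with hypothetical values of the common variance and of r (chosen so that the bivariate covariance matrix is positive definite); it compares the empirical tail of |Z_1| - |Z_2| with the closed form and confirms numerically that the displayed density is the negative derivative of that tail.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
v, r = 1.0, 0.5                                  # hypothetical values; need 0 < r < 2v
cov = np.array([[v, v - r], [v - r, v]])
Z = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)
D = np.abs(Z[:, 0]) - np.abs(Z[:, 1])

c1, c2 = np.sqrt(4 * v - 2 * r), np.sqrt(2 * r)

def tail(t):
    # P(|Z1| - |Z2| >= t) = 2 [1 - Phi(t/c1)] [1 - Phi(t/c2)] for t >= 0
    return 2 * norm.sf(t / c1) * norm.sf(t / c2)

def density(t):
    # density of |Z1| - |Z2| at t > 0, as displayed in the proof
    return (np.sqrt(2 / np.pi) / c2 * norm.sf(t / c1) * np.exp(-t**2 / (2 * c2**2))
            + np.sqrt(2 / np.pi) / c1 * norm.sf(t / c2) * np.exp(-t**2 / (2 * c1**2)))

for t in [0.0, 0.5, 1.0, 2.0]:
    print(t, (D >= t).mean(), tail(t))           # empirical vs. closed-form tail
h = 1e-4
print(density(1.0), (tail(1.0 - h) - tail(1.0 + h)) / (2 * h))  # density vs. -d/dt of tail
```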
Combining the above three inequalities yields that when Δ_n √(n log p)→ 0, ℙ (t - Δ_n ≤W_j < t + Δ_n) = ℙ ( √(n) t - √(n)Δ_n ≤√(n)W_j ≤√(n) t + √(n)Δ_n) ≤ C √(n)Δ_n [1 + O (√(n)Δ_n log p)] {√(2)/√(π) c_2, j[ 1 - Φ( √(n) t /c_1, j) ] exp{- (√(n) t )^2 /2 c_2, j^2 } + √(2)/√(π) c_1, j[ 1 - Φ( √(n) t /c_2, j) ] exp{- (√(n) t ) ^2 /2 c_1, j^2 }}. Next we need to deal with the denominator ℙ (√(n)W_j ≥ t). Via integration by parts, we can deduce that for t ∈ [4n^-1/2max_j ∈ℋ_0 c_1, j c_2, j, G^-1(c_1qa_n/p)], ℙ (√(n)W_j ≥√(n) t) = 2 [ 1 - Φ( √(n) t / c_1, j) ] [ 1 - Φ( √(n) t/c_2, j) ] ≥C{ (√(n) t)^-1[ 1 - Φ( √(n) t / c_1, j) ] exp{ - (√(n) t )^2 /2 c_2, j^2 } + (√(n) t)^-1[ 1 - Φ( √(n) t / c_2, j) ] exp{ - (√(n) t )^2 /2 c_1, j^2 }} ≥C (√(n) t)^-1 f_√(n)W_j (√(n) t) , where we have used the definition of the density in (<ref>) and the fact that 1 - Φ(x) ≥ 0.75 x^-1 e^- x^2/ 2 for x ≥ 4, and C is some constant depending on c_1, j and c_2, j. Combining (<ref>) and (<ref>) and using some direct calculations, we can obtain the bound for the ratio in R_n in the second range sup_t∈ [4n^-1/2max_j ∈ℋ_0 c_1, j c_2, j, G^-1(c_1qa_n/p))∑_j ∈ℋ_0 ∩ N_dep^cℙ (t - Δ_n ≤W_j < t + Δ_n) /∑_j ∈ℋ_0 ∩ N_dep^cℙ (W_j ≥ t ) ≤C√(n)Δ_n·√(n) G^-1 ( c_1 q a_n / p ) = O (√(n)Δ_n √(log p)). This together with the result for the first range proved previously leads to R_n = O (√(n)Δ_n √(log p)). Finally, plugging (<ref>) into (<ref>) yields (<ref>) because (log p)^1/γ m_n / a_n → 0 and √(n)Δ_n (log p)^1/2 + 1/γ→ 0. Proof of (<ref>). Recall from Condition <ref> that p_1^-1∑_j ∈ℋ_1ℙ ( W_j < - t ) ≤ G(t) for t ∈ (0, C √(n^-1log p)) with C some large constant. Also, note that Δ_n = o(G^-1 (c_1 q a_n /p)) since √(n)Δ_n → 0 by assumption and G^-1 (c_1 q a_n/p) = O(√(n^-1log p)) as shown in the proof of (<ref>). It follows from some direct calculations that a_n ^-1∑_j ∈ℋ_1ℙ( W_j < - G^-1 ( c_1 q a_n / p ) + Δ_n ) ≤ a_n^-1 (p - p_0) G( G^-1 ( c_1 q a_n / p ) - Δ_n ) = c_1 q (p - p_0) / p + a_n^-1(p - p_0) | G' ( ξ ) |Δ_n, where ξ is some number lying between G^-1 ( c_1 q a_n / p ) and G^-1 ( c_1 q a_n / p ) - Δ_n. From (<ref>) and f_√(n)W_j (√(n)ξ )≤ C with C>0 some constant, we can deduce that | G' (ξ)| = ∑_j ∈ℋ_0 p_0^-1√(n) f_√(n)W_j (√(n)ξ ) ≤C √(n) m_n /p_0 + p_0^-1∑_j ∈ℋ_0 ∩ N_dep^c √(n) f_√(n)W_j (√(n)ξ ) ≤C √(n) m_n /p_0 + C p_0^-1√(n)·√(n) G( c_1 q a_n /p) ∑_H_0 ∩ N_dep^cℙ( W_j ≥ G( c_1 q a_n /p) ) ≤C √(n) m_n /p_0 + C p_0^-1√(n log p ) p_0 c_1 q a_n /p, where the second last step above is due to (<ref>). Therefore, substituting the bound above into (<ref>) gives that a_n ^-1∑_j ∈ℋ_1ℙ( W_j < - G^-1 ( c_1 q a_n / p + Δ_n ) ≤c_1 q (p - p_0)/p + C Δ_n √(n) m_n (p - p_0 )/a_n p_0 + C Δ_n √(n log p ) q (p - p_0)/p → 0, where we have used the assumption that p_0 / p → 1, Δ_n √(n log p )→ 0, and m_n / a_n → 0. This derives (<ref>), which concludes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> The main intuition of the proof is that when the approximate augmented data matrix ^ is close to its perfect counterpart ^, the corresponding Lasso estimators would be close as well. From the definitions of _j in (<ref>) and _j in (<ref>), it holds that max_1 ≤ j ≤ 2p | β_j - β̂_j | ≤max_1 ≤ j ≤ 2p | β_j^ - β̂_j^ | + max_1 ≤ j ≤ 2p | _j^⊤( - ^^) /_j^⊤^_j - _j^⊤( - ^^) /_j^⊤^_j|. We will aim to prove that for some large enough constant C, ℙ( ^ - ^_2 ≤ C Δ_n s √(log p/n)) → 1, ℙ( max_1 ≤ j ≤ 2p | _j^⊤( - ^^) /_j^⊤^_j - _j^⊤( - ^^) /_j^⊤^_j|≤ C Δ_n s√(log p/n))→ 1. 
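Before turning to the formal proof, the first of the two displayed claims, namely that two Lasso fits are close whenever the corresponding design matrices are columnwise close, can be previewed numerically. The Python sketch below is our own illustration on a plain (non-augmented) design with hypothetical dimensions, perturbation level Δ_n, and tuning parameter; it uses scikit-learn's Lasso, whose objective is (2n)^{-1}‖y - Xb‖_2^2 + λ‖b‖_1, and reports the distance between the two fits alongside the rate Δ_n s √(log p / n) appearing in the claim. The comparison is purely illustrative; the constant C and the probability statements are of course not visible in a single draw.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p, s, delta_n = 400, 50, 5, 0.05              # hypothetical sizes and perturbation level
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:s] = 1.0
y = X @ beta + rng.standard_normal(n)

# columnwise perturbation with ||X_hat_j - X_j||_2 <= delta_n for every j
E = rng.standard_normal((n, p))
E *= delta_n / np.linalg.norm(E, axis=0, keepdims=True)
X_hat = X + E

lam = np.sqrt(2 * np.log(p) / n)
lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=100_000)
b_perfect = lasso.fit(X, y).coef_.copy()
b_approx = lasso.fit(X_hat, y).coef_.copy()

print("l2 distance between the two Lasso fits:", np.linalg.norm(b_perfect - b_approx))
print("rate proxy Delta_n * s * sqrt(log p / n):", delta_n * s * np.sqrt(np.log(p) / n))
```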
Then combining the two results above can establish the desired conclusion of Lemma <ref>. We next proceed with proving (<ref>) and (<ref>). Proof of (<ref>). It follows from the Karush–Kuhn–Tucker (KKT) condition that n^-1 [^]^⊤^ (^ - ^ ) = n^-1 [^]^⊤ - λζ, n^-1 [^]^⊤^ (^ - ^ ) = n^-1 [^]^⊤ - λζ̂, where ζ = (ζ_1, …, ζ_2p) and ζ̂ = (ζ̂_1, …, ζ̂_2p) with ζ_j = {[ (β^_j) β_j^≠ 0,; ∈ [-1, 1] β_j^ = 0, ]. ζ̂_j = {[ (β̂_j^) β̂_j^≠ 0,; ∈ [-1, 1] β̂_j^ = 0. ]. Taking the difference between (<ref>) and (<ref>) above leads to n^-1 [^]^⊤^ (^ - ^ ) + n^-1([^]^⊤^ - [^]^⊤^) (^ - ^) = - n^-1([^]^⊤^ - [^]^⊤^) ( ^ - ^ ) + n^-1( ^ - ^)^⊤ - λ (ζ - ζ̂). Furthermore, multiplying both sides of the equation above by (^ - ^ )^⊤ yields that n^-1^ (^ - ^ )_2^2 = n^-1( ^ - ^)^⊤([^]^⊤^ - [^]^⊤^) ( ^ - ^) - n^-1 ( ^ - ^)^⊤([^]^⊤^ - [^]^⊤^) ( ^ - ^ ) + n^-1 ( ^ - ^)^⊤( ^ - ^)^⊤ - λ ( ^ - ^)^⊤ (ζ - ζ̂). We claim that the last term on the right-hand side of the expression above satisfies that ( ^ - ^)^⊤ (ζ - ζ̂) ≥ 0 . To understand this, observe that when both β_j^ and β̂_j^ are nonzero or zero, it is easy to see that (β_j^ - β̂_j^) (ζ_j - ζ̂_j ) ≥ 0. When either of β_j^ and β̂_j^ is zero, without loss of generality let us assume that β_j^ = 0 and β̂_j^≠ 0. When β_j^ = 0 and β̂_j^ > 0, it follows that ζ_j ≤ 1 = ζ̂_j and hence (β_j^ - β̂_j^) (ζ_j - ζ̂_j) = - β̂_j^ ((ζ_j - ζ̂_j)) ≥ 0. Similarly, we can show that (β_j^ - β̂_j^) (ζ_j - ζ̂_j) ≥ 0 when β_j^=0 and β̂_j^ < 0. Thus, the last term on the right-hand side of (<ref>) above satisfies that -( ^ - ^)^⊤ (ζ - ζ̂) ≤ 0 . We next examine the three terms on the right-hand side of the earlier expression above separately. First, let us observe that n^-1 [^]^⊤^ - [^]^⊤^_max ≤ n^-1 [^]^⊤(^ - ^) _max+ n^-1 ( ^ - ^)]^⊤^_max ≤max_jn^-1/2_j^_2max_jn^-1/2(_j^ - _j^)_2 + max_jn^-1/2_j^_2max_jn^-1/2(_j^ - _j^)_2. Under Condition <ref> and the sub-Gaussian assumption for , it can be shown that ℙ( n^-1 [^]^⊤^ - [^]^⊤^_max≥ C Δ_n ) → 0 for some constant C > 0. From the sparsity of and in Condition <ref>, we have that with probability 1 - o(1), the first term on the right-hand side of (<ref>) can be bounded as n^-1|( ^ - ^)^⊤([^]^⊤^ - [^]^⊤^) ( ^ - ^)| ≤ C Δ_n s ^ - ^_2^2. By the Cauchy–Schwarz inequality, we can bound the second term on the right-hand side of (<ref>) as |n^-1 ( ^ - ^)^⊤([^]^⊤^ - [^]^⊤^) ( ^ - ^ )| ≤^ - ^_2 n^-1([^]^⊤^ - [^]^⊤^) ( ^ - ^ ) _2. Finally, with the aid of Condition <ref> on sparsity and Condition <ref> on the restrictive eigenvalues, the left-hand side of (<ref>) can be lower bounded by c_1^ - ^_2^2. Combining all the results above and applying the Cauchy–Schwarz inequality to the second and third terms on the right-hand side of (<ref>), we can deduce that as Δ_n s → 0, the representation in (<ref>) entails that with probability 1 - o(1), ^ - ^_2 ≲ n^-1([^]^⊤^ - [^]^⊤^) ( ^ - ^) _2 + max_J: |J| ≤ C s n^-1( ^_J - ^_J )^⊤_2 := I_1 + I_2. We will bound the two terms I_1 and I_2 above separately. It follows from (<ref>), the sparsity of and ^, and Lemma <ref> that with probability 1 - o(1), I_1 ≤ C Δ_n s^1/2^ - ^_2 ≤ C Δ_n s √(log p/n). As for term I_2, conditional on (^, ^ ) we have that for each 1 ≤ j ≤ 2p, n^-1/2( ^_j - ^_j )^⊤ d∼ N (0, n^-1^_j - ^_j _2^2 ). Thus, it holds that ℙ( I_2≥ C σΔ_n √(s log n /n )) ≤ℙ( s max_1 ≤ j ≤ 2p ( n^-1/2( ^_j - ^_j )^⊤)^2 ≥ C^2 σ^2 Δ_n^2 s log n ) = ℙ( max_1 ≤ j ≤ 2p n^-1/2^_j - ^_j _2 |Z| ≥ C σΔ_n √(log n)), where Z d∼ N(0, σ^2) is independent of ^ and ^. 
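As a numerical aside, the KKT (stationarity) system invoked at the start of this proof can be inspected directly for a single Lasso fit: on the active set, n^{-1} times the inner product of each column with the residual equals the penalty level times the sign of the fitted coefficient, while off the active set it is bounded by the penalty level in absolute value. The sketch below is our own illustration with hypothetical sizes; it relies on scikit-learn's Lasso, which minimizes (2n)^{-1}‖y - Xb‖_2^2 + α‖b‖_1, so that its stationarity condition takes exactly the displayed form with λ replaced by α.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n, p, s = 300, 60, 5                              # hypothetical sizes
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:s] = 1.5
y = X @ beta + rng.standard_normal(n)

alpha = np.sqrt(2 * np.log(p) / n)
b_hat = Lasso(alpha=alpha, fit_intercept=False, max_iter=100_000, tol=1e-10).fit(X, y).coef_

grad = X.T @ (y - X @ b_hat) / n                  # n^{-1} X^T (y - X b_hat) = alpha * zeta_hat
active = b_hat != 0
print(np.max(np.abs(grad[active] - alpha * np.sign(b_hat[active]))))  # ~ 0 on the active set
print(np.max(np.abs(grad[~active])) <= alpha + 1e-8)                  # subgradient bound off it
```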
Moreover, Condition <ref> implies that max_1 ≤ j ≤ 2p^_j - ^_j _2 ≤Δ_n with probability 1 - o(1). Then using the union bound, we can obtain that for some constant C > √(2), ℙ( I_2 ≥ C σΔ_n √(s log n /n )) ≤ℙ( |Z| > C σ√(log n)) + ℙ(max_1 ≤ j ≤ 2p^_j - ^_j _2 ≥Δ_n) → 0. Consequently, substituting (<ref>) and (<ref>) into (<ref>) leads to (<ref>). Further, applying (<ref>) again with the bounds in (<ref>), (<ref>), (<ref>), and (<ref>) yields that ℙ(n^-1/2^ (^ - ^ ) _2 ≤ C Δ_n s √(log p/n)) → 1. Proof of (<ref>). Let us first state three results (<ref>), (<ref>), and (<ref>) below that will be used repeatedly in our proof. With similar arguments as for (<ref>) and (<ref>) and the union bound, we can deduce that under Conditions <ref>–<ref>, ℙ( max_1 ≤ j ≤ 2p _j - _j _2 ≤ C ( m_n^1/2Δ_n + Δ_n m_n √(log p/n)) ≤ C m_n^1/2Δ_n )→ 1, ℙ(n^-1/2max_j _-j^ ( _j - _j )_2 ≤ C Δ_n m_n √(log p/n)) → 1, where we have used √(m_n log p/n)→ 0 for showing (<ref>). Observe that for 1 ≤ j ≤ 2p, n^-1/2 ( _j - _j ) _2 ≤ n^-1/2 ( _j^ - _j^) _2 + n^-1/2_-j^ ( _j - _j ) _2 + n^-1/2 (_-j^ - _-j^) _j _2 + n^-1/2 (_-j^ - _-j^) (_j - _j) _2. Then it follows from the sparsity of 𝒮_j = (_j) ∪(_j) ∪(_j), the sub-Gaussianity of X_j, and the bound in (<ref>) that with probability 1 - o(1), max_1 ≤ j ≤ 2p n^-1/2 ( _j - _j ) _2 ≤ C (Δ_n + Δ_n m_n √(log p/n) + Δ_n m_n^1/2max_1 ≤ j ≤ 2p_j_2 + m_n Δ_n √(log p/n)) ≤ C Δ_n m_n^1/2max_1 ≤ j ≤ 2p_j_2 ≤ C Δ_n m_n^1/2. We are now ready to establish (<ref>). In particular, we have the decomposition for the main term in (<ref>) max_1 ≤ j ≤ 2p | _j^⊤( - ^^) /_j^⊤^_j - _j^⊤( - ^^) /_j^⊤^_j| ≤max_1 ≤ j ≤ 2p | (_j - _j )^⊤( - ^^) /_j^⊤^_j| + max_1 ≤ j ≤ 2p | _j^⊤(^^ - ^^) /_j^⊤^_j| + max_1 ≤ j ≤ 2p | _j^⊤( - ^^) ( 1/_j^⊤^_j - 1/_j^⊤^_j )| := P_1 + P_2 + P_3. We will investigate the three terms P_1, P_2, and P_3 above separately. Let us first deal with term P_1. Note that P_1 ≤max_1 ≤ j ≤ 2p | (_j - _j )^⊤^( ^ - ^) /_j^⊤^_j| + max_1 ≤ j ≤ 2p | (_j - _j )^⊤/_j^⊤^_j|. Since d∼ N( 0, I_n) and is independent of design matrix , it holds that conditional on design matrix , (_j - _j )^⊤/_j^⊤^_jd∼ N ( 0, _j - _j _2^2 / [_j^⊤^_j ]^2 ). This together with the bounds in (<ref>) and (<ref>) leads to ℙ( max_1 ≤ j ≤ 2p | (_j - _j )^⊤/_j^⊤^_j| > C m_n^1/2Δ_n √(log p/n)) = ∑_j = 1^2pℙ( _j - _j _2 / |_j^⊤^_j | · | Z | > C m_n^1/2Δ_n √(log p/n)) ≤∑_j = 1^2pℙ( _j - _j _2 / n · | Z | > C m_n^1/2Δ_n √(log p/n)) + o(1) ≤∑_j = 1^2pℙ (|Z| > C √(log p)) + o(1) = o(1), where Z d∼ N(0, σ^2) is independent of ^ and ^, and C is some large constant that may take different value at each appearance. In addition, from (<ref>), the Cauchy–Schwarz inequality, Lemma <ref>, and (<ref>), we can deduce that with probability 1 - o(1), max_1 ≤ j ≤ 2p | (_j - _j )^⊤^( ^ - ^) /_j^⊤^_j| ≤max_1 ≤ j ≤ 2p _j - _j _2 ^ (^ - ^) _2 / | _j^⊤^_j | ≤ C Δ_n m_n^1/2√(s log p/n). Substituting (<ref>) and (<ref>) into (<ref>) yields that with probability 1 - o(1), P_1 ≤ C Δ_n m_n^1/2√(s log p/n). We next turn to the bound for term P_2. It is easy to see that P_2 ≤max_1 ≤ j ≤ 2p | _j^⊤^(^ - ^) /_j^⊤^_j| + max_1 ≤ j ≤ 2p | _j^⊤ ( ^ - ^ ) ^/_j^⊤^_j| + max_1 ≤ j ≤ 2p | (_j - _j )^⊤^(^ - ^) /_j^⊤^_j| + max_1 ≤ j ≤ 2p | (_j - _j )^⊤ ( ^ - ^ ) ^/_j^⊤^_j| := P_21 + P_22 + P_23 + P_24. 
Regarding term P_21, in view of (<ref>) and the definition of _j, we have that with probability 1 - o(1), P_21 ≤max_1 ≤ j ≤ 2p | β^_j - β̂_j^ | + max_1 ≤ j ≤ 2p | _j^⊤^_-j(^_-j - ^_-j) /_j^⊤^_j| ≤ C Δ_n s √(log p/n) + max_1 ≤ j ≤ 2p | ( _j + ^_-j ( _j - _j ) ) ^⊤^_-j(^_-j - ^_-j) /_j^⊤^_j| ≤ C Δ_n s √(log p/n) + max_1 ≤ j ≤ 2p | _j^⊤^_-j(^_-j - ^_-j) /_j^⊤^_j| + max_1 ≤ j ≤ 2p | [^ ( _j - _j ) ]^⊤^_-j(^_-j - ^_-j) /_j^⊤^_j|. We will bound the last two terms on the very right-hand side of the expression above separately. Since for ℓ≠ j, n^-1 [ _j^⊤^_ℓ] = 0 due to zero correlation between _j and _-j^, and _j and _ℓ^ both have i.i.d. sub-Gaussian entries, we can show that for ℓ≠ j, ℙ( max_1 ≤ j ≤ 2p max_ℓ≠ j n^-1 | _j^⊤_ℓ^ | ≥ C √(log p/n)) ≤ C p^-1→ 0. This combined with (<ref>), the sparsity assumption that |J| = |() ∪() ∪()| ≲ s, and the result in (<ref>) yields that with probability 1 - o(1), the second term on the very right-hand side of (<ref>) above can be bounded as max_1 ≤ j ≤ 2p | _j^⊤^_-j(^_-j - ^_-j) /_j^⊤^_j| ≤ C n^-1max_1 ≤ j ≤ 2p max_J': | J' | ≲ s _j^⊤_ J'∖{j}_2 ·^_J'∖{j} - ^_J'∖{j}_2 ≤ C √( s log p/n)·Δ_n s √(log p/n)≤ C Δ_n s √(log p/n), where the last inequality above holds due to the assumption that √( s log p/n)→ 0. By the Cauchy–Schwarz inequality, we can deduce that with probability 1 - o(1), the third term on the very right-hand side of (<ref>) above can be bounded as max_1 ≤ j ≤ 2p | [^ ( _j - _j ) ]^⊤^_-j(^_-j - ^_-j) /_j^⊤^_j| ≤ C n^-1max_1 ≤ j ≤ 2p ^ ( _j - _j ) _2 ·^_-j(^_-j - ^_-j)_2. An application of Lemma <ref>, (<ref>), and the sub-Gaussian assumption of X_j gives that with probability 1 - o(1), the second term on the right-hand side above can be bounded as max_1 ≤ j ≤ 2p n^ -1/2^_-j(^_-j - ^_-j)_2 ≤ n^ -1/2^(^ - ^)_2 + max_1 ≤ j ≤ 2p n^ -1/2^_j _2 | β_j - β̂_j| ≤ C Δ_n s √(log p/n). Then plugging (<ref>) into (<ref>) yields that max_1 ≤ j ≤ 2p | [^ ( _j - _j ) ]^⊤^_-j(^_-j - ^_-j) /_j^⊤^_j| ≤ C √(m_n log p/n)· C Δ_n s √(log p/n)≤ C Δ_n s √(log p/n), where the last inequality above is due to the assumption that √(slog p/n)→ 0 and m_n ≲ s. Hence, it follows from substituting (<ref>) and (<ref>) into (<ref>) that with probability 1 - o(1), P_21≤ C Δ_n s √(log p/n). We next proceed with considering term P_22 introduced in (<ref>). Observe that ^ - ^ = [0, - ] and ^ = (^⊤ , 0^⊤)^⊤. Then it holds that ( ^ - ^ ) ^ = 0. From (<ref>) and the Cauchy–Schwarz inequality, we can deduce that P_22 ≤max_1 ≤ j ≤ 2p | _j^⊤ ( ^ - ^ ) ^/_j^⊤^_j| + max_1 ≤ j ≤ 2p | _j^⊤ ( ^ - ^ ) (^ - ^ ) /_j^⊤^_j| ≤ C n^-1max_1 ≤ j ≤ 2p_j_2 · ( ^ - ^ ) (^ - ^ ) _2. Moreover, we have _j = _j + _-j^ (_j - _j). Since the components of _j are i.i.d. sub-Gaussian random variables, it is easy to see that ℙ (max_1 ≤ j ≤ 2pn^-1/2_j _2 ≥ C) → 0 for some large enough constant C > 0. Further, it follows from the sub-Gaussianity of X_j and the sparsity of _j and _j that max_1 ≤ j ≤ 2p n^-1/2_-j^ (_j - _j) _2 ≤ C m_n^1/2√(m_n log p/n) ≤ C m_n √(log p/n)→ 0. Thus, when m_n √(log p/n)→ 0 we have ℙ ( n^-1/2max_1 ≤ j ≤ 2p _j _2 ≥ C) → 0 . Similarly, based on Lemma <ref> and the sparsity of ^ and ^, it holds that with probability 1-o(1), n^-1/2 ( ^ - ^ ) (^ - ^ ) _2 ≤max_J': |J'| ≲ s( ∑_j ∈ J' n^-1^_j - ^_j _2^2 )^1/2·^_J' - _J'^_2 ≤ C s^1/2Δ_n · ( √(s log p/n) + Δ_n s √(log p /n) ) ≤ C Δ_n s √(log p/n), where the last inequality above holds due to Δ_n s^1/2→ 0. Consequently, combining the above three inequalities shows that with probability 1 - o(1), P_22≤ C Δ_n s√(log p/n). 
We now deal with term P_23 in (<ref>). In view of the Cauchy–Schwarz inequality, (<ref>), (<ref>), and (<ref>), we can obtain that with probability 1 - o(1), P_23 ≤max_1 ≤ j ≤ 2p _j - _j _2 /_j^⊤^_j·^(^ - ^ ) _2 ≤ C √(m_n log p/n)·Δ_n s √(log p/n) ≤ C Δ_n s √(log p/n). As for term P_24, since ( ^ - ^ ) = 0 it follows that with probability 1 - o(1), P_24 = max_1 ≤ j ≤ 2p | (_j - _j )^⊤ ( ^ - ^ ) (^ - ^) /_j^⊤^_j | ≤max_1 ≤ j ≤ 2p _j - _j _2 /_j^⊤^_j· ( ^ - ^ ) (^ - ^ ) _2 ≤ C √(m_n log p/n)·Δ_n s √(log p/n) ≤ C Δ_n s √(log p/n), where we have applied the bounds in (<ref>), (<ref>), and (<ref>). Consequently, plugging (<ref>), (<ref>), (<ref>), and (<ref>) into (<ref>) yields that with probability 1 - o(1), P_2≤ C Δ_n s √(log p/n). Now we proceed with dealing with term P_3. Note that P_3 ≤max_1 ≤ j ≤ 2p | _j ^⊤ ( - ^^ ) | ·| _j^⊤^_j - _j^⊤^_j | /| _j^⊤^_j | ·| _j^⊤^_j | . From (<ref>) and (<ref>), we can see that with probability 1 - o(1), max_1 ≤ j ≤ 2p n^-1/2_j _2 ≤max_1 ≤ j ≤ 2p n^-1/2_j _2 + max_1 ≤ j ≤ 2p n^-1/2_j - _j _2 ≤ C + C m_n Δ_n ≤ C. It follows from (<ref>), Condition <ref>, and the sub-Gaussian distribution of _j^ that with probability 1 - o(1), n^-1 | (_j - _j)^⊤^_j | ≤ C Δ_n m_n^1/2, n^-1 | _j^⊤ (^_j - _j^)| ≤ C Δ_n. Then with the aid of (<ref>), we can show that with probability 1 - o(1), min_1 ≤ j ≤ 2p n^-1 | _j^⊤^_j | ≥min_1 ≤ j ≤ 2p n^-1| _j^⊤^_j | - max_1 ≤ j ≤ 2p( n^-1 | (_j - _j)^⊤^_j | - n^-1 | _j^⊤ (^_j - _j^)| ) ≥ C - Cm_n Δ_n - C Δ_n ≥ C as m_n Δ_n → 0. As for the second component on the right-hand side of (<ref>) above, combining the results in (<ref>), (<ref>), and (<ref>) gives that with probability 1 - o(1), max_1 ≤ j ≤ 2p| _j^⊤^_j - _j^⊤^_j | /| _j^⊤^_j | ·| _j^⊤^_j | ≤max_1 ≤ j ≤ 2p| ( _j - _j )^⊤^_j | /| _j^⊤^_j | ·| _j^⊤^_j | + max_1 ≤ j ≤ 2p| _j^⊤ (_j^ - _j^) | /| _j^⊤^_j | ·| _j^⊤^_j | ≤ C n^-1 ( m_n^1/2Δ_n + Δ_n ). Regarding the first component on the right-hand side in (<ref>), from (^ - ^) = 0 we can deduce that max_1 ≤ j ≤ 2p n^-1| _j ^⊤ ( - ^^ ) | ≤max_1 ≤ j ≤ 2p n^-1| _j ^⊤ | + max_1 ≤ j ≤ 2p n^-1| _j ^⊤^ ( ^ - ^ ) | + max_1 ≤ j ≤ 2p n^-1| _j ^⊤ (^ - ^) ^| . Since d∼ N( 0, σ^2I_n ), it is easy to see that for the standard normal random variable Z, ℙ(max_1 ≤ j ≤ 2p n^-1| _j ^⊤ | > C √(log p/n)) = ℙ(max_1 ≤ j ≤ 2p n^-1_j_2 · |Z| > C √(log p/n)) ≤ℙ (|Z| > C √(log p)) → 0. Further, by Lemma <ref>, the sub-Gaussianity of X_j, and the sparsity of ^ and ^, we can obtain that with probability 1 - o(1), n^-1 | _j^⊤^ ( ^ - ^ ) | ≤ n^-1 | _j^⊤^ ( ^ - ^ ) | + n^-1 | _j^⊤^ ( ^ - ^ ) | ≤ C( √( slog p /n) + Δ_n s √(log p /n)) ≤ C √( slog p /n). Similarly, since (^ - ^) = 0, it holds that with probability 1 - o(1), n^-1 | _j^⊤ (^ - ^) ^ | = n^-1 | _j^⊤ (^ - ^) ( ^ -^ ) | ≤ C Δ_n s^1/2·√( s log p/n) ≤ C √( s log p/n). Consequently, by m_n≲ s in Condition <ref> we have that with probability 1 - o(1), P_3 ≤ C m_n^1/2Δ_n ·√(slog p/n)≤ C Δ_n s √(log p/n). Finally, a combination of (<ref>), (<ref>), (<ref>), and (<ref>) establishes (<ref>). This completes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> Using the definitions of W_j and w_j and the triangle inequality, we see that ∑_j = 1^p ℙ (| W_j - w_j | ≥ C √(n^-1log p )) ≤∑_j = 1^p ℙ( √(n)| |β_j - β_j | - |β_j + p - β_j + p| | ≥ C √(log p )) ≤∑_j = 1^p [ ℙ( √(n) |β_j - β_j | ≥ C √(log p ) / 2 ) + ℙ( √(n) |β_j + p - β_j + p| ≥ C √(log p ) /2 ) ]. The main idea of the proof is to exploit the decomposition in (<ref>) and the observation that the main term therein follows the normal distribution. 
Let us start with bounding the error term in (<ref>). We claim that with probability 1 - O (p^-3), max_1 ≤ j ≤ 2p| ∑_k ≠ j√(n)_j^⊤^_k (β_k^ - β_k^)/_j^⊤^_j | ≤C m_n^1/2 s log p /√(n). From the fact that β^_j + p = 0 for 1 ≤ j ≤ p and the bound in (<ref>), since m_n^1/2 s log p /√(n)≪√(log p) we can deduce through the union bound that ∑_j = 1^p ℙ( √(n) |β_j - β_j | ≥ C √(log p ) / 2 ) ≤∑_j = 1^p ℙ( |_j^⊤ | /_j _2 ·√(n)τ_j ≥ C √(log p ) / 3 ) + ∑_j = 1^p ℙ( max_1 ≤ j ≤ 2p| ∑_k ≠ j√(n)_j^⊤^_k (β_k - β_k^)/_j^⊤^_j | > C m_n^1/2 s log p /√(n)) ≤∑_j = 1^p ℙ( |_j^⊤ | /_j _2 ·√(n)τ_j ≥ C √(log p ) / 3 ) + O(p^-1). Recall the result (<ref>) in Lemma <ref> and that _j^⊤/_j _2 ∼ N(0, σ^2). As m_n log p/n = o(1), it holds that for some large constant C> 0, ∑_j = 1^p ℙ( _j^⊤/_j _2 ·√(n)τ_j ≥ C √(log p ) / 3 ) ≤∑_j = 1^p ℙ( _j^⊤/_j _2 ≥C√(log p )) = p exp{ - C^2 log p / 2 }→ 0. Similarly, we can show that ∑_j = 1^p ℙ( _j + p^⊤/_j+p_2 ·√(n)τ_j ≥ C √(log p )) → 0. Plugging the two inequalities above into (<ref>) leads to the desired result in Lemma <ref>. It remains to establish (<ref>). Proof of (<ref>). Observe that for k ≠ j, n^-1_j^⊤^_k = n^-1(^_j - ^_-j_j) ^⊤^_k = n^-1_j ^⊤^_k - n^-1 (_j - _j )^⊤ (^_-j )^⊤^_k. Since _j and _k^ are uncorrelated, it follows from the sub-Gaussian assumption in Condition <ref> that for some constant C > 0, ℙ( n^-1 | _j ^⊤^_k | ≥ C √(log p /n)) ≤ 2 p^-3. In light of lemma <ref> and the sub-Gaussian assumption on _j, we can deduce that with probability 1 - O(p^-3), | n^-1 (_j - _j )^⊤ (^_-j )^⊤^_k | ≤ n^-1/2^_-j (_j - _j) _2 n^-1/2_k _2 ≤ C √(m_n log p/n). Plugging the above two results into (<ref>), when m_nlog p = o(n) an application of the union bound shows that with probability 1 - O( p^-1), max_ 1 ≤ j ≤ p max_k ≠ j n^-1 | _j^⊤^_k | ≤ C √(log p /n) + C √(m_nlog p /n) ≤ C √( m_n log p /n). Similarly, when √(log p /n) = o(1), we can show that there exists some constant C>0 such that with probability 1 - O(p^-1), min_1 ≤ j ≤ p n^-1_j^⊤^_j ≥ C. Consequently, plugging (<ref>), (<ref>), and (<ref>) into Lemma <ref> yields that with probability 1 - O (p^-3), max_1 ≤ j ≤ p| ∑_k ≠ j√(n)_j^⊤^_k (β_k - β_k^)/_j^⊤^_j | ≤√(n)max_1 ≤ j ≤ pmax_k ≠ j | _j^⊤^_k | /min_1 ≤ j ≤ p | _j^⊤^_j | ·^ - ^_1 ≤ C √( m_n log p )· s √(log p/n) = C m_n^1/2 s log p /√(n), which establishes (<ref>). This concludes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> The intuition of the proof is that the sparsity of ^A implies the weak dependence among the components of the knockoff statistic vector = (W_1, …, W_p), which entails the weak dependence among the indicator functions 1 (W_j > t)'s. For 1 ≤ j ≤ p, let us define N_j = { l ∈ℋ_0: _j, l^A ≠ 0 }. From the sparsity assumption on ^A in Condition <ref>, we see that |N_j| ≤ m_n for any 1 ≤ j≤ p. Then we can obtain through expanding the variance that ( ∑_j ∈ℋ_01 (W_j > t) ) = ∑_j ∈ℋ_0∑_l ∈ N_j^c ∩ℋ_0 l ≠ j( ℙ (W_j ≥ t, W_l ≥ t) - ℙ (W_j ≥ t) ℙ (W_l ≥ t) ) + ∑_j ∈ℋ_0∑_l ∈ N_j ∪{ j }( ℙ (W_j ≥ t, W_l ≥ t) - ℙ (W_j ≥ t) ℙ (W_l ≥ t) ) := V_1(t) + V_2(t). We will deal with terms V_1(t) and V_2(t) above separately. Regarding the second term V_2(t), it follows from |N_j ∪{ j}| ≤ m_n + 1 that sup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] V_2 (t) /p_0 G(t) ≤sup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] ∑_j ∈ℋ_0∑_l ∈ N_j ∪{ j }ℙ (W_j ≥ t) /∑_j ∈ℋ_0ℙ (W_j ≥ t) ≤sup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] ∑_j ∈ℋ_0 (m_n + 1) ℙ (W_j ≥ t) /∑_j ∈ℋ_0ℙ (W_j ≥ t) ≤ m_n + 1. 
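The decomposition exploited in the preceding proof appears to be, up to the paper's notation, the standard debiasing identity: for any score vector z_j with z_j^⊤ X_j^A ≠ 0, the corrected estimator β̂_j + z_j^⊤(y - X^A β̂)/(z_j^⊤ X_j^A) deviates from the true coefficient by the Gaussian main term z_j^⊤ ε/(z_j^⊤ X_j^A) plus the remainder ∑_{k ≠ j} z_j^⊤ X_k^A (β_k - β̂_k)/(z_j^⊤ X_j^A). The Python sketch below is our own stand-in (a plain design in place of the augmented one, a nodewise-Lasso score, and hypothetical sizes and tuning); it verifies this algebraic identity to machine precision, which holds exactly for any choice of z_j.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p, s = 300, 40, 4                      # hypothetical sizes
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:s] = 1.0
eps = rng.standard_normal(n)
y = X @ beta + eps

lam = np.sqrt(2 * np.log(p) / n)
b_hat = Lasso(alpha=lam, fit_intercept=False, max_iter=100_000).fit(X, y).coef_

j = 0
# z_j: residual from regressing X_j on the remaining columns (nodewise Lasso here;
# the identity below holds for any z_j with z_j^T X_j != 0)
gamma = Lasso(alpha=lam, fit_intercept=False, max_iter=100_000).fit(
    np.delete(X, j, axis=1), X[:, j]).coef_
z_j = X[:, j] - np.delete(X, j, axis=1) @ gamma

b_debiased = b_hat[j] + z_j @ (y - X @ b_hat) / (z_j @ X[:, j])

main = z_j @ eps / (z_j @ X[:, j])
remainder = sum(z_j @ X[:, k] * (beta[k] - b_hat[k]) for k in range(p) if k != j) \
            / (z_j @ X[:, j])
# exact algebraic identity: estimation error = Gaussian main term + remainder
print(b_debiased - beta[j], main + remainder)
```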
We claim that as m_n^1/2 s (log p)^3/2 + 1/γ/√(n)→ 0, (log p)^1/γsup_t ∈ (0, G^-1 ( c_1 q a_n / p ) ] V_1 (t) / [p_0 G (t)]^2 → 0. Therefore, combining (<ref>), (<ref>), and (<ref>) leads to the desired result of Lemma <ref>. It remains to establish (<ref>). Proof of (<ref>). Let {η_j}_j = 1^p be a sequence of independent random variables with η_j having density function given by h_j (t) = √(2)/√(π) a_j[1 - Φ( b_j^-1 t ) ] exp{ - t^2 / (2 v_ j^2) } + √(2)/√(π) b_j[1 - Φ( v_ j^-1 t ) ] exp{ - t^2 / (2 b_ j^2) }, where v_j = √( 2 ( e_j^2)^-1 (1 - (e_j, e_j+p)) ) and b_j =√( 2 ( e_j^2)^-1 (1 + (e_j, e_j+p)) ). The essential step in the proof is to show that for l ∈ N_j^c∩ℋ_0, ( |ξ_j| - |ξ_j+p|, |ξ_l| - |ξ_l+p|) d→ (η_j, η_l). We proceed with proving such result. Define δ_n = C m_n^1/2 s log p/√(n). We claim that for l ≠ j and l ∈ N_j^c∩ℋ_0, ℙ ( W_j ≥ t, W_l ≥ t ) ≤ℙ ( η_j ≥√(n) t - δ_n ) ℙ ( η_l ≥√(n) t - δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O(p^-3), ℙ ( W_j ≥ t, W_l ≥ t ) ≥ℙ ( η_j ≥√(n) t + δ_n ) ℙ ( η_l ≥√(n) t + δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O(p^-3), ℙ ( W_j ≥ t) ≥ℙ ( η_j ≥√(n) t + δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O(p^-3), ℙ ( W_j ≥ t) ≤ℙ ( η_j ≥√(n) t - δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O(p^-3). The proofs for (<ref>)–(<ref>) above are analogous. Without loss of generality, we will present only the proof of (<ref>) and postpone it to the end of the proof for Lemma <ref>. In view of (<ref>)–(<ref>) above and the definition of V_1 (t) in (<ref>), we can deduce that V_1 (t) = ∑_j ∈ℋ_0∑_l ∈ N_j^c ∩ℋ_0 l ≠ j( ℙ (W_j ≥ t, W_l ≥ t) - ℙ (W_j ≥ t) ℙ (W_l ≥ t) ) ≤∑_j ∈ℋ_0∑_ l ≠ j{ℙ ( η_j ≥√(n) t - δ_n ) ℙ ( η_l ≥√(n) t - δ_n ) (1 + O ( √(m_n (log p)^3/n)) - ℙ ( η_j ≥√(n) t + δ_n ) ℙ ( η_j ≥√(n) t + δ_n ) (1 + O ( √(m_n (log p)^3/n))) } + O(p^-1) = ∑_j ∈ℋ_0∑_ l ≠ jℙ (√(n) t - δ_n ≤η_j ≤√(n) t + δ_n) ℙ ( η_l ≥√(n) t - δ_n ) + ∑_j ∈ℋ_0∑_ l ≠ jℙ ( η_j ≥√(n) t - δ_n ) ℙ (√(n) t - δ_n ≤η_l ≤√(n) t + δ_n) + ∑_j ∈ℋ_0∑_ l ≠ jℙ ( η_j ≥√(n) t - δ_n ) ℙ ( η_l ≥√(n) t - δ_n ) · O( √(m_n (log p)^3 /n)) + O(p^-1) := V_11(t) + V_12(t) + V_13(t) + O(p^-1). Recall that p_0 G(t) = ∑_j ∈ℋ_0ℙ (W_j ≥ t). Then it follows from the definition of V_11(t) and (<ref>) that V_11(t)/ [p_0 G(t)]^2 ≤∑_j ∈ℋ_0∑_ l ≠ jℙ (√(n) t - δ_n ≤η_j ≤√(n) t + δ_n) ℙ ( η_l ≥√(n) t - δ_n ) /[ ∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t + δ_n ) (1 + O ( √(m_n (log p)^3/n) )) + O(p^-2) ]^2. We will consider two ranges t ∈ (0, 4 n^-1/2max_1 ≤ j ≤ p (v_j b_j)) and t ∈ [4 n^-1/2max_1 ≤ j ≤ p (v_j b_j), G^-1 (c_1 q a_n/p) ] separately. For the first range t ∈ (0, 4 n^-1/2max_1 ≤ j ≤ p (v_j b_j)), we can see that √(n) t is upper bounded by a constant. Since δ_n = o(1) by the assumption that m_n^1/2 s (log p)^3/2 + 1/γ/√(n)→ 0, it follows that √(n) t + δ_n and √(n) t - δ_n are both of a constant order. Hence, by the definition of the density function h_j(·) of η_j shown in (<ref>), max_1 ≤ j ≤ p h_j(u) is bounded by a constant for u ∈ [√(n) t - δ_n, √(n) t + δ_n], and C_1 ≤min_1≤ j ≤ pℙ (η_j ≥√(n) t + δ_n) ≤max_1 ≤ j ≤ pℙ (η_j ≥√(n) t - δ_n) ≤ C_2 for some positive constants C_1 < C_2. Thus, it is easy to see that sup_t ∈ (0, 4 n^-1/2max_1 ≤ j ≤ p (v_j b_j))V_11(t)/ [p_0 G(t)]^2 ≤ C p_0^2 δ_n max_1≤ j ≤ psup_u ∈ [√(n) t - δ_n, √(n) t + δ_n] h_j (u) max_1 ≤ j ≤ pℙ (η_j ≥√(n) t - δ_n) / p_0^2 [min_1 ≤ j ≤ pℙ (η_j ≥√(n) t + δ_n) ]^2 ≤ C δ_n = C m_n^1/2 s log p/√(n). We proceed with considering the second range t ∈ [4 n^-1/2max_1 ≤ j ≤ p (v_j b_j), G^-1 (c_1 q a_n/p) ). 
An application of similar arguments as for (<ref>) shows that max_1 ≤ j ≤ psup_t ∈ [4 n^-1/2max_1 ≤ j ≤ p (v_j b_j)), G^-1 (c_1 q a_n/p)]∑_j ∈ℋ_0ℙ ( √(n) t - δ_n ≤η_j ≤√(n) t + δ_n) /∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t + δ_n ) ≤ C √(n) G^-1 (c_1 q a_n/p) ·δ_n. Moreover, it follows from plugging t = G^-1(c_1 q a_n/p) into (<ref>) and taking summation over j ∈ℋ_0 that c_1 q a_n p_0/p ≤∑_j ∈ℋ_0ℙ (η_j ≥√(n) G^-1(c_1 q a_n/p) - δ_n) (1 + O ( √(m_n (log p)^3/n))) + O(p^-3). Then from the density function h_j(t) for η_j, we can obtain through some direct calculations that ℙ ( η_j ≥ t) = 2 [1 - Φ(v_j^-1 t)] [1 - Φ(b_j^-1 t)]. Further, combining (<ref>) and (<ref>) yields that G^-1(c_1 q a_n/p) = O(√(log p/n) ). Substituting this bound into (<ref>) implies that max_1 ≤ j ≤ psup_t ∈ [4 n^-1/2max_1 ≤ j ≤ p (v_j b_j)), G^-1 (c_1 q a_n/p)]∑_j ∈ℋ_0ℙ ( √(n) t - δ_n ≤η_j ≤√(n) t + δ_n) /∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t + δ_n ) ≤ C m_n^1/2 s (log p)^3/2/√(n), where in the last inequality above we have utilized the definition of δ_n. Thus as m_n^1/2 s (log p)^3/2/√(n)→ 0, it holds that max_1 ≤ j ≤ psup_t ∈ [4 n^-1/2max_1 ≤ j ≤ p (v_j b_j)), G^-1 (c_1 q a_n/p)]| ∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t - δ_n ) /∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t + δ_n ) - 1 | ≤ C m_n^1/2 s (log p)^3/2/√(n)→ 0. Since p_0 G(t) ≥ c_1 q a_n p_0 / p →∞ for 0≤ t ≤ G^-1 (c_1 q a_n/p), it follows from taking summation over j ∈ℋ_0 on both sides of (<ref>) that as m_n^1/2 (log p)^3/2 / √(n)→ 0, ∑_j ∈ℋ_0ℙ (η_j ≥√(n) t - δ_n) ≥ C ( c_1 q a_n p_0/ p + O(p^-2 ) ) →∞, which along with (<ref>) implies that ∑_j ∈ℋ_0ℙ (η_j ≥√(n) t + δ_n) ≥ C ( c_1 q a_n p_0/ p + O(p^-2 ) ) →∞. Combining this with (<ref>), we can further bound the ratio in (<ref>) in the second range of t ∈ [4 n^-1/2max_1 ≤ j ≤ p (v_j b_j)), G^-1 (c_1 q a_n/p)) as sup_t ∈ [4 n^-1/2max_1 ≤ j ≤ p (v_j b_j)), G^-1 (c_1 q a_n/p)]V_11(t)/ [p_0 G(t)]^2 ≤{[∑_j ∈ℋ_0ℙ ( √(n) t - δ_n ≤η_j ≤√(n) t + δ_n) ]^2 /[ ∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t + δ_n ) ]^2 + ∑_j ∈ℋ_0ℙ ( √(n) t - δ_n ≤η_j ≤√(n) t + δ_n) /∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t + δ_n ) } ×(1 + O ( √(m_n (log p)^3/n) + p^-2)) ≤ C m_n^1/2 s (log p)^3/2/√(n). Hence, we see from the above result and (<ref>) that sup_t ∈ (0, G^-1(c_1 q a_n/p))V_11(t)/ [p_0 G(t)]^2 ≤ C m_n^1/2 s (log p)^3/2/√(n). In a similar manner, we can deduce that sup_t ∈ (0, G^-1(c_1 q a_n/p))V_12(t)/ [p_0 G(t)]^2 ≤ C m_n^1/2 s (log p)^3/2/√(n) and sup_t ∈ (0, G^-1(c_1 q a_n/p))V_13(t)/ [p_0 G(t)]^2 ≤ C √(m_n (log p)^3/n). Combining (<ref>) and (<ref>)–(<ref>) yields (<ref>) as m_n^1/2 s (log p)^3/2 + 1/γ/√(n)→ 0. This completes the proof of (<ref>). It remains to establish (<ref>). Proof of (<ref>). Note that for j ∈ℋ_0, it holds that β_j^ = β_j + p^ = 0 under the setting of the linear model. Then it follows that W_j = | β_j | - |β_j+p| = | β_j - β_j^ | - | β_j + p - β_j+p^ |. For 1 ≤ j ≤ 2p, let us define ξ_j = √(n)τ_j ·_j^⊤/_j _2. In view of the expression in (<ref>) and the bound of the remainder term established in (<ref>), an application of the total probability inequality gives that ℙ( W_j ≥ t, W_l ≥ t ) ≤ℙ( | ξ_j | - | ξ_j + p| ≥√(n) t - δ_n, | ξ_l | - | ξ_l + p| ≥√(n) t - δ_n ) + ℙ( max_1 ≤ j ≤ 2p| ∑_k ≠ j√(n)_j^⊤^_k (β_k - β_k^)/_j^⊤^_j | > δ_n ) = ℙ( | ξ_j | - | ξ_j + p| ≥√(n) t - δ_n, | ξ_l | - | ξ_l + p| ≥√(n) t - δ_n ) + O (p^-3). It suffices to consider probability ℙ( | ξ_j | - | ξ_j + p| ≥ t - δ_n, | ξ_l | - | ξ_l + p| ≥ t - δ_n ) for t ∈ (0, √(n) G^-1 (c_1 q a_n/p) ]. 
A useful observation is that ℙ( | ξ_j | - | ξ_j + p| ≥ t - δ_n, | ξ_l | - | ξ_l + p| ≥ t - δ_n ) ≤ℙ( | ξ_j | - | ξ_j + p| ≥ t - δ_n, | ξ_l | - | ξ_l + p| ≥ t - δ_n, max{ | ξ_j |, | ξ_j + p|, | ξ_l |, | ξ_l + p| }≤ C √(log p )) + ℙ( max{ | ξ_j |, | ξ_j + p|, | ξ_l |, | ξ_l + p| } > C √(log p)) := P_1 + P_2. We will consider terms P_1 and P_2 above separately. Let us first deal with term P_2. From the definition of ξ_j, (<ref>) in Lemma <ref>, and the fact that _j^⊤/_j _2 d∼ N(0, 1), we can obtain through the union bound that as m_n log p /n→ 0 and for some large constant C > 4 ( e_j^2)^-1/2, ℙ ( |ξ_j| ≥ C √(log p) ) ≤ℙ( | _j^⊤/_j _2 | ≥ 2 C ( e_j^2)^1/2√(log p) / 3 ) + ℙ (√(n)τ_j ≥ 3 ( e_j^2)^-1/2 /2 ) = O (p^-3). Hence, the inequality above implies that P_2 = O( p^-3). We next proceed with analyzing term P_1. Given ^, denote by f_ξ, ξ_j+p (x, y) the density of (ξ_i, ξ_j+p) and f_ξ_l, ξ_l+p |(ξ_j, ξ_j+p) (u, w | x, y) the conditional density of (ξ_l, ξ_l+p ) | (ξ_j, ξ_j+p). Then probability P_2 can be written as ℙ( | ξ_j | - | ξ_j + p| ≥ t - δ_n, | ξ_l | - | ξ_l + p| ≥ t - δ_n, max{ | ξ_j |, | ξ_j + p|, | ξ_l |, | ξ_l + p| ≤ C √(log p )) = _^[∫_|x| - |y| ≥ t - δ_n |x| ≤ C √(log p) |y| ≤ C √(log p) f_ξ, ξ_j+p (x, y) ·∫_|u| - |w| ≥ t - δ_n |u| ≤ C √(log p) |w| ≤ C √(log p) f_ξ_l, ξ_l+p |(ξ_j, ξ_j+p) (u, w | x, y) du dv dx dy ]. Since d∼ N( 0, I_n) and is independent of ^, it is easy to see that for j ≠ l, conditional on ^ we have (ξ_j, ξ_j + p, ξ_l, ξ_l + p)^⊤| ^d∼ N ( 0, ), where the covariance matrix is given by = [ _11_12; _21_22 ] with _11 = [ n τ_j^2 n _j^⊤_j + p/ |_j^⊤_j^| |_j + p^⊤_j + p^|; n _j^⊤_j + p/ |_j^⊤_j^| |_j + p^⊤_j + p^| n τ_j + p^2 ], _12 = _21^⊤ = [ n _j^⊤_l/ |_j^⊤_j^| |_l^⊤_l^| n _j^⊤_l + p/ |_j^⊤_j^| |_l + p^⊤_l + p^|; n _l^⊤_j + p/ |_l^⊤_l^| |_j + p^⊤_j + p^| n _l +p^⊤_j + p/ |_l + p^⊤_l +p^| |_j + p^⊤_j + p^| ], _22 = [ n τ_l^2 n _l^⊤_l + p/ |_l^⊤_l^| |_l + p^⊤_l + p^|; n _l^⊤_l + p/ |_l^⊤_l^| |_l + p^⊤_l + p^| n τ_l + p^2 ]. It follows from the conditional distribution of the multivariate normal distribution that given ^, f_ξ_l, ξ_l+p |(ξ_j, ξ_j+p) (u, v | x, y) = 1 /2π | _22 - _21_11^-1_12 |^1/2× exp{ - 1/2[ ( u v ) - _21_11^-1( x y ) ]^⊤ (_22 - _21_11^-1_12)^-1 ·[ ( u v ) - _21_11^-1( x y ) ] }. For l ≠ j and l ∈ N_j^c, it holds that (e_j, e_l) = _j, l^A/_j, j^A _l, l^A = 0. Since _j, l^A = _j, l+p^A = _j + p, l^A = _j+p, l+p^A due to the symmetric structure of , we also have (e_j, e_l+p) = (e_j+p, e_l) = (e_j+p, e_l+p ) = 0 for l ≠ j and l ∈ N_j^c. Then it follows from (<ref>) in Lemma <ref> that for l ≠ j and l ∈ N_j^c, with probability 1- O(p^-3) n^-1_j^⊤_l ≤ C √(m_n log p/n), n^-1_j^⊤_l+p≤ C √(m_n log p/n), n^-1_j+p^⊤_l≤ C √(m_n log p/n), n^-1_j+p^⊤_l+p≤ C √(m_n log p/n). Similarly, for 1 ≤ j ≤ 2p we can show that with probability 1 - O(p^-3), n^-1_j^⊤^_j ≥ C. _j^⊤_l = (_j^ - _-j^_j)^⊤ (_l^ - _-l^_l) = ( _j + ^_-j (_j - _j) )^⊤ ( _l + _-l^ (_l - _l) ) = _j^⊤_l + _j^⊤_-l^ (_l - _l) + _l^⊤_-j^ (_j - _j) + [ _-j^ (_j - _j) ]^⊤_-l^ (_l - _l). For l ≠ j and l ∈ N_j^c, we know that [e_j e_l] = (e_j, e_l) = _j, l^A/_j, j^A _l, l^A = 0. In addition, since e_j and e_l are sub-Gaussian random variables, we can obtain by applying Bernstein’s inequality for sub-exponential random variables that ℙ (n^-1 | _j^⊤_l | ≥ C √(log p/n)) = O(p^-3). 
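As a numerical aside, the conditional mean and conditional covariance entering the conditional density above (the cross block times the inverse of the conditioning block applied to (x, y)^⊤, and the corresponding Schur complement, respectively) can be recovered empirically: over simulated draws of the four-dimensional normal vector, regressing the second pair on the first pair reproduces the coefficient matrix, and the covariance of the regression residuals reproduces the Schur complement. The sketch below uses a hypothetical 4 × 4 covariance matrix of our own choosing.

```python
import numpy as np

rng = np.random.default_rng(4)
# hypothetical covariance of (xi_j, xi_{j+p}, xi_l, xi_{l+p}); diagonally dominant, hence PD
V = np.array([[1.0, 0.3, 0.2, 0.1],
              [0.3, 1.0, 0.1, 0.2],
              [0.2, 0.1, 1.0, 0.4],
              [0.1, 0.2, 0.4, 1.0]])
V11, V12, V21, V22 = V[:2, :2], V[:2, 2:], V[2:, :2], V[2:, 2:]

coef_formula = V21 @ np.linalg.inv(V11)              # conditional-mean coefficient matrix
cond_cov = V22 - V21 @ np.linalg.inv(V11) @ V12      # Schur complement

Z = rng.multivariate_normal(np.zeros(4), V, size=200_000)
A, B = Z[:, :2], Z[:, 2:]
coef_ols, *_ = np.linalg.lstsq(A, B, rcond=None)     # regress (xi_l, xi_{l+p}) on (xi_j, xi_{j+p})
resid = B - A @ coef_ols
print(coef_formula, coef_ols.T, sep="\n")            # should agree up to Monte Carlo error
print(cond_cov, np.cov(resid.T), sep="\n")           # Schur complement vs. residual covariance
```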
Moreover, in view of Lemma <ref>, sparsity of _j and _j and the fact that (e_j, X_l) = 0 for j ≠ l, we have with probability 1 - O(p^-3) n^-1| _j^⊤_-l^ (_l - _l) | ≤ n^-1| _j^⊤_j^ (_l, j - _l) | + n^-1| ∑_k ≠ j, l_j^⊤_k^ (_l,k - _l, k) | ≤ n^-1| _j^⊤_j^ (_l, j - _l) | + n^-1[max_J': |J'| ≲ m_n∑_k ≠ j, l k ∈ J' (_j^⊤_k^ )^2 ∑_k ≠ j, l k ∈ J' (_l,k - _l, k)^2 ] ^2 ≤ C√(m_n log p/n) +C √(m_n log p/n)·√(m_n log p/n)≤ C √(m_n log p/n). Similarly, it holds that with probability 1 - O(p^-3), n^-1| _l^⊤_-j^ (_j - _j) | ≤ C √(m_n log p/n). Additionally, we have that with probability 1 - O(p^-3), | [ _-j^ (_j - _j) ]^⊤_-l^ (_l - _l) | ≤ C √(m_n log p/n)·√(m_n log p/n)≤ C √(m_n log p/n). Therefore, for l ≠ j and l ∈ N_j^c, it follows that with probability 1 - O(p^-3), n^-1_j^⊤_l ≤ C √(m_n log p/n). Then from (<ref>), (<ref>), and the definition of _12, we can obtain that with probability 1 - O(p^-3), _12_max≤ C √(m_n log p/n). We have shown in (<ref>) that _12_max≤ C √(m_n log p/n) with probability 1 - O(p^-3). Similarly, when e_j^2 e_j+p^2 - ([e_j e_j+p] )^2 > C for some constant C > 0, it can be shown that | V_11 | ≥ C and |V_22| ≥ C with probability 1 - O(p^-3). Let us define an event 𝒞 = {^: _12_max≤ C_1 √(m_n log p/n), |_22| ≥ C_2, |_11| ≥ C_2, _11_max≤ C_3, _22_max≤ C_3}. We have shown that ℙ (𝒞) ≥ 1- O(p^-3). Then it is straightforward to see that conditional on event 𝒞, we have 1 /2π | _22 - _21_11^-1_12 |^1/2 = 1/2 π | V_22 |^-1/2(1 + O ( m_n log p/n) ) and _22^-1 - (_22 - _21_11^-1_12)^-1_max≤ C m_n log p/n. In addition, given event 𝒞 and the range that |x| ≤ C √(log p) and |y| ≤ C √(log p), it holds that _21_11^-1( x y ) _2 ≤ C √(m_n/n)log p. Further, given event 𝒞 and that max{|u|, |w|, |x|, |y|}≤ C √(log p), it follows from (<ref>)–(<ref>) that as m_n (log p)^3/n = o(1), | [ ( u w ) - _21_11^-1( x y ) ]^⊤ (_22 - _21_11^-1_12)^-1[ ( u w ) - _21_11^-1( x y ) ] - ( u w )^⊤_22^-1( u w ) | ≤ C √(m_n (log p)^3/n). Hence, substituting the bounds in (<ref>) and (<ref>) into (<ref>) yields that as m_n (log p)^3/n = o(1), f_ξ_l, ξ_l+p |(ξ_j, ξ_j+p) (u, w | x, y) = 1 /2π | _22 |^1/2exp{ - 1/2( u w )^⊤_22^-1( u w ) }·(1 + O ( √(m_n (log p)^3/n))) = f_ξ_l, ξ_l+p (u, w) (1 + O ( √(m_n (log p)^3/n))), which entails that (ξ_l, ξ_l+p) is asymptotically independent of (ξ_j, ξ_j+p) for l ≠ j and l ∈ N_j^c. By plugging (<ref>) into (<ref>), we can deduce that P_1 ≤𝔼{1 (𝒞) ℙ( |ξ_j| - |ξ_j+p| ≥ t - δ_n, max{ |ξ_j|, |ξ_j+p|}≤ C √(log p) | ^) ×ℙ( |ξ_l| - |ξ_l+p| ≥ t - δ_n, max{ |ξ_l|, |ξ_l+p|}≤ C √(log p) | ^)} ×(1 + O ( √(m_n (log p)^3/n))) + ℙ (𝒞^c ), where ℙ (𝒞^c ) = O(p^-3). We next show that given ^, |ξ_j| - |ξ_j+p| converges in distribution to η_j. Given ^, we see that (ξ_j, ξ_j+p) d∼ N( 0, _11). Without ambiguity, let us denote by _11 = [ σ_1, n^2 ρ_n σ_1, nσ_2, n; ρ_n σ_1, nσ_2, n σ_2, n^2 ] for simpler notation, where σ_1, n^2 = n τ_j^2, σ_2, n^2 = n τ_j + p^2, and ρ_n = _j^⊤_j+p / (_j_2 _j+p_2 ). We define an event ℰ = {|σ_1, n^2 - ( e_j^2)^-1 | ≤ C √(m_n log p/n), |σ_2, n^2 - ( e_j+p^2)^-1 | ≤ C √(m_n log p/n), | ρ_n - (e_j, e_j+p) | ≤ C √(m_n log p/n)}. It follows from Lemma <ref> that ℙ (ℰ) ≥ 1 - O(p^-3). 
Some straightforward calculations show that for t > 0, given ^ the density of |ξ_j| - |ξ_j+p| can be written as f_|ξ_j| - |ξ_j+p| (t) = √(2)/√(π)a_1, n[1 - Φ( a_2, n^-1 t ) ] exp{ - t^2 / (2 a_1, n^2) } + √(2)/√(π)a_3, n[1 - Φ( a_4,n^-1 t ) ] exp{ - t^2 / (2 a_3, n^2) }, where a_1, n = √(σ_1, n^2 + σ_2, n^2 - 2 ρ_n σ_1, nσ_2, n), a_2, n = σ_1, nσ_2, n a_1, n√( (1 - ρ_n^2))/σ_2, n^2 - ρ_n σ_1, nσ_2, n, a_3, n = √(σ_1, n^2 + σ_2, n^2 + 2 ρ_n σ_1, nσ_2, n), a_4, n = σ_1, nσ_2, n a_3, n√( (1 - ρ_n^2))/σ_2, n^2 + ρ_n σ_1, nσ_2, n . Recall the notation that v_j = √( 2 ( e_j^2)^-1 (1 - (e_j, e_j+p)) ) and b_j =√( 2 ( e_j^2)^-1 (1 + (e_j, e_j+p)) ). It holds that (e_j^2 ) = (_j, j^A)^-1 = (^A_j+p, j+p)^-1 = (e_j+p^2) due to the symmetry of ^A. On event ℰ, we have that | a_1, n / v_j - 1 | ≤ C √(m_n log p/n), | a_2, n / b_j - 1 | ≤ C √(m_n log p/n), |a_3, n / b_j - 1 | ≤ C √(m_n log p/n), | a_4, n / v_j - 1 | ≤ C √(m_n log p/n). Thus, in view of the definition of h_j(t) in (<ref>) and (<ref>), it follows that as |t| ≤ C√(log p), f_|ξ_j| - |ξ_j+p| (t) = h_j (t) (1 + O ( √(m_n (log p)^3/n))). With the aid of the above result, we can deduce that on event ℰ, ℙ( |ξ_j| - |ξ_j+p| ≥ t - δ_n, max{ |ξ_j|, |ξ_j+p|}≤ C √(log p) | ^) ≤ℙ( |ξ_j| - |ξ_j+p| ≥ t - δ_n, |ξ_j| - |ξ_j+p| ≤ C √(log p) | ^) ≤(∫_ t - δ_n ^ C √(log p) h_j (u) du ) (1 + O ( √(m_n (log p)^3/n))) = ℙ ( t - δ_n ≤η_j ≤ C √(log p) ) (1 + O ( √(m_n (log p)^3/n))) = [ℙ ( η_j ≥ t - δ_n ) - ℙ (η_j > C √(log p) ) ] (1 + O ( √(m_n (log p)^3/n))). Moreover, in light of (<ref>) it is easy to see that ℙ (η_j > C √(log p) ) = O(p^-3) for some large constant C, which together with (<ref>) leads to ℙ( |ξ_j| - |ξ_j+p| ≥ t - δ_n, max{ |ξ_j|, |ξ_j+p|}≤ C √(log p) | ^) ≤ℙ ( η_j ≥ t - δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O (p^-3). Plugging (<ref>) into (<ref>) shows that P_1 ≤ℙ ( η_j ≥ t - δ_n ) ℙ ( η_l ≥ t - δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O(p^-3). Finally, combining (<ref>), (<ref>), (<ref>), and (<ref>) yields (<ref>). Similarly, we can also establish (<ref>)–(<ref>). This completes the proof of Lemma <ref>. §.§ Proof of Lemma <ref> Let us first prove (<ref>). In the proof of Lemma <ref> in Section <ref>, we have established the lower bound and upper bound for ℙ (W_j ≥ t) in (<ref>) and (<ref>), respectively. Recall the definitions that δ_n = C m_n^1/2 s log p/√(n) and b_n = C Δ_n s √(log p/n). For the numerator and denominator in (<ref>), we can write that p_0( G(t - b_n) - G(t + b_n) ) = ∑_j ∈ℋ_0[ ℙ (W_j ≥ t - b_n ) - ℙ (W_j ≥ t + b_n ) ] ≤∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t - √(n) b_n -δ_n ) (1 + O ( √(m_n (log p)^3/n))) - ∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t + √(n) b_n + δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O(p^-2) ≤∑_j ∈ℋ_0ℙ ( √(n) t - √(n) b_n - δ_n ≤η_j ≤√(n) t + √(n) b_n + δ_n ) + ∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t - √(n) b_n -δ_n ) · O ( √(m_n (log p)^3/n)) + O(p^-2) and p_0 G(t) ≥∑_j ∈ℋ_0ℙ (η_j ≥√(n) t + δ_n ) (1 + O ( √(m_n (log p)^3/n))) + O(p^-2), respectively. It follows from (<ref>)–(<ref>), similar arguments as for (<ref>), and G^-1 (c_1 q a_n/p) = O(√(log p/n)) in the proof of Lemma <ref> that as √(n) G^-1 (c_1 q a_n/p) (√(n) b_n+ δ_n ) → 0, sup_t ∈ (0, G^-1 (c_1 q a_n/p)] G(t - b_n ) - G(t + b_n) / G(t) ≤ C √(log p) (√(n) b_n + δ_n) + C √(m_n (log p)^3/n) ≤ C ( m_n^1/2 s (log p)^3/2/√(n) + Δ_n s log p ). Thus, we see that when m_n^1/2 s (log p)^3/2 + 1/γ/√(n) + Δ_n s (log p)^1 + 1 /γ→ 0, the desired result (<ref>) holds. We next proceed with establishing (<ref>). In view of Condition <ref>, it holds that p_1^-1∑_j ∈ℋ_1ℙ ( W_j < - t ) ≤ G(t) for t = O(√(n^-1log p)). 
Moreover, we have b_n = C Δ_n s √(log p/n) = o(G^-1(c_1 q a_n /p)) due to the assumption Δ_n s → 0 and G^-1 (c_1 q a_n /p) = O (√(log p/n) ). Then it follows that a_n ^-1∑_j ∈ℋ_1ℙ( W_j < - G^-1 ( c_1 q a_n / p ) + b_n ) ≤ a_n^-1 (p - p_0) G( G^-1 ( c_1 q a_n / p ) - b_n ) = c_1 q (p - p_0) / p + a_n^-1(p - p_0) [ G ( G^-1 ( c_1 q a_n / p ) - b_n ) - G ( G^-1 ( c_1 q a_n / p ) ) ]. For notational simplicity, let us define t_n = G^-1 ( c_1 q a_n / p ). With the aid of the upper and lower bounds for ℙ (W_j ≥ t) given in (<ref>) and (<ref>), we can deduce that G ( t_n - b_n ) - G ( t_n ) ≤ p_0^-1∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t_n - √(n) b_n - δ_n ) (1 + O( √(m_n (log p)^3 /n)) ) - p_0^-1∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t_n + δ_n ) (1 + O( √(m_n (log p)^3 /n)) ) + O(p^-2) = p_0^-1∑_j ∈ℋ_0ℙ (√(n) t_n - √(n) b_n - δ_n ≤η_j ≤√(n) t_n + δ_n ) + p_0^-1∑_j ∈ℋ_0ℙ ( η_j ≥√(n) t_n - √(n) b_n - δ_n ) · O( √(m_n (log p)^3 /n)) + O(p^-2). An application of similar arguments as for (<ref>) leads to ℙ (√(n) t_n - √(n) b_n - δ_n ≤η_j ≤√(n) t_n + δ_n ) /ℙ (η_j ≥√(n) t_n + δ_n ) ≤ C √(t_n) ( √(n) + δ_n ) ≤ C ( m_n^1/2 s (log p)^3/2/√(n) + Δ_n s log p ) and | ℙ ( η_j ≥√(n) t_n - √(n) b_n - δ_n ) /ℙ (η_j ≥√(n) t_n + δ_n ) - 1 | ≤ C ( m_n^1/2 s (log p)^3/2/√(n) + Δ_n s log p ). It follows from the lower bound in (<ref>) and G(t_n) = G(G^-1(c_1 q a_n/p)) = c_1 q a_n/p that as m_n (log p)^3/n→ 0, p_0^-1∑_j ∈ℋ_0ℙ (η_j ≥√(n) t_n + δ_n ) ≤ C (c_1 q a_n/p + O(p^-3) ) ≤ C c_1 q a_n/p. Therefore, combining (<ref>)–(<ref>) shows that G ( t_n - b_n ) - G ( t_n ) ≤ C ( m_n^1/2 s (log p)^3/2/√(n) + Δ_n s log p ) ·c_1 q a_n /p + C √(m_n (log p)^3 /n)·c_1 q a_n /p + O(p^-2) ≤ C ( m_n^1/2 s (log p)^3/2/√(n) + Δ_n s log p ) ·c_1 q a_n /p + O(p^-2). Finally, substituting the above bound into (<ref>) yields that as m_n^1/2 s (log p)^3/2 /√(n) + Δ_n s (log p) → 0, a_n ^-1∑_j ∈ℋ_1ℙ( W_j < - G^-1 ( c_1 q a_n / p + Δ_n ) ≤c_1 q (p - p_0)/p + C ( m_n^1/2 s (log p)^3/2/√(n) + Δ_n s log p ) ·c_1 q (p - p_0)/p + O( p - p_0 / a_n p^2) → 0, where we have used the assumption that p_0/p → 1. This establishes (<ref>), which concludes the proof of Lemma <ref>.
Model-Based End-to-End Learning for Multi-Target Integrated Sensing and Communication José Miguel Mateos-Ramos, Student Member, IEEE, Christian Häger, Member, IEEE, Musa Furkan Keskin, Member, IEEE, Luc Le Magoarou, Member, IEEE, Henk Wymeersch, Senior Member, IEEE This work was supported, in part, by a grant from the Chalmers AI Research Center Consortium (CHAIR), by the National Academic Infrastructure for Supercomputing in Sweden (NAISS), the Swedish Foundation for Strategic Research (SSF) (grant FUS21-0004, SAICOM), Hexa-X-II, part of the European Union’s Horizon Europe research and innovation programme under Grant Agreement No 101095759., and Swedish Research Council (VR grant 2022-03007). The work of C. Häger was also supported by the Swedish Research Council under grant no. 2020-04718. José Miguel Mateos-Ramos, Christian Häger, Musa Furkan Keskin and Henk Wymeersch are with the Department of Electrical Engineering, Chalmers University of Technology, Sweden (email: [email protected]; [email protected]; [email protected]; [email protected]). Luc Le Magoarou is with INSA Rennes, CNRS, IETR - UMR 6164, F-35000, Rennes, France (email: [email protected]). Accepted 08-Jul-2023. Received 18-Jun-2023; in original form 23-May-2023 ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= We study model-based end-to-end learning in the context of integrated sensing and communication (ISAC) under hardware impairments. A monostatic orthogonal frequency-division multiplexing (OFDM) sensing and multiple-input single-output (MISO) communication scenario is considered, incorporating hardware imperfections at the ISAC transceiver antenna array. To enable end-to-end learning of the ISAC transmitter and sensing receiver, we propose a novel differentiable version of the orthogonal matching pursuit (OMP) algorithm that is suitable for multi-target sensing. Based on the differentiable OMP, we devise two model-based parameterization strategies to account for hardware impairments: (i) learning a dictionary of steering vectors for different angles, and (ii) learning the parameterized hardware impairments. For the single-target case, we carry out a comprehensive performance analysis of the proposed model-based learning approaches, a neural-network-based learning approach and a strong baseline consisting of least-squares beamforming, conventional OMP, and maximum-likelihood symbol detection for communication. 
Results show that learning the parameterized hardware impairments offers higher detection probability, better angle and range estimation accuracy, lower communication symbol error rate (SER), and exhibits the lowest complexity among all learning methods. Lastly, we demonstrate that learning the parameterized hardware impairments is scalable also to multiple targets, revealing significant improvements in terms of ISAC performance over the baseline. Hardware impairments, integrated sensing and communication (ISAC), joint communication and sensing (JCAS), machine learning, model-based learning, orthogonal matching pursuit (OMP). § INTRODUCTION Next-generation wireless communication systems are expected to operate at higher carrier frequencies to meet the data rate requirements necessary for emerging use cases such as smart cities, e-health, and digital twins for manufacturing <cit.>. Higher carrier frequencies also enable new functionalities, such as ISAC. ISAC aims to integrate radar and communication capabilities in one joint system, which enables hardware sharing, energy savings, communication in high-frequency radar bands, and improved channel estimation via sensing-assisted communications, among other advantages <cit.>. ISAC has been mainly considered by means of dual-functional waveforms. For instance, radar signals have been used for communication <cit.>, while communication waveforms have proven to yield radar-like capabilities <cit.>. Furthermore, optimization of waveforms to perform both tasks simultaneously has also been studied <cit.>, where the results depend on the cost function to optimize and the ISAC optimization variables. However, conventional ISAC approaches degrade in performance under model mismatch, i.e., if the underlying reality does not match the assumed mathematical models. In particular at high carrier frequencies, hardware impairments can severely affect the system performance and hardware design becomes very challenging <cit.>. This increases the likelihood of model mismatch in standard approaches, and problems become increasingly difficult to solve analytically if hardware impairments are considered. DL approaches based on large NN have proven to be useful under model mismatch or complex optimization problems <cit.>. DL does not require any knowledge about the underlying models as it is optimized based on training data, which inherently captures the potential impairments of the system. DL has been investigated in the context of ISAC for a vast range of applications, such as predictive beamforming in vehicular networks <cit.>, waveform design <cit.> and channel estimation <cit.> in IRS-assisted ISAC scenarios, multi-target sensing and communication in THz transmissions <cit.>, or efficient resource management <cit.>. However, most previous works on DL for ISAC consider single-component optimization, either at the transmitter or receiver. On the other hand, end-to-end learning <cit.> of both the transmitter and receiver has proven to enhance the final performance of radar <cit.> and communication <cit.> systems. End-to-end learning in ISAC was applied by means of an AE architecture in <cit.>, to perform single-target angle estimation and communication symbol estimation, under hardware impairments. This was recently extended to multiple targets in <cit.>, although without considering impairments, where the AE outperformed conventional ESPRIT <cit.> in terms of angle estimation for single- and dual-snapshot transmissions. 
Nevertheless, DL approaches often lack interpretability and require large amounts of training data to obtain satisfactory performance. To overcome the disadvantages of large DL models, MB-ML <cit.> instead parameterizes existing models and algorithms while maintaining their overall computation graph as a blueprint. This allows training initialization from an already good starting point, requiring less training data to optimize, and typically also offers a better understanding of the learned parameters. A popular example of MB-ML learning is deep unfolding <cit.>, where iterative algorithms are “unrolled” and interpreted as multi-layer computation graphs. In the context of sensing, deep unfolding of the fixed-point continuation algorithm with one-sided l_1-norm was applied to angle estimation of multiple targets <cit.>, showing enhanced accuracy with respect to DL and model-based benchmark approaches. In <cit.>, the ISTA was unfolded to perform angle estimation in the presence of array imperfections. Related to communications, deep unfolding has been applied to massive MIMO channel estimation in <cit.>, where classical steering vector models are used as a starting point and then optimized to learn the system hardware impairments, by unfolding the matching pursuit algorithm <cit.>. This approach was later refined to reduce the required number of learnable parameters in <cit.>. Previous MB-ML approaches <cit.> exhibit three primary shortcomings that can limit their effectiveness in practical scenarios. Firstly, they focus only on receiver learning; however, end-to-end learning of transmitter and receiver, which holds great potential given its promising performance in model-free DL applications <cit.>, remains unexplored in MB-ML. Secondly, sensing works <cit.> only investigate angle estimation, although range estimation is also required to estimate target locations. Hence, end-to-end MB-ML for multi-target positioning has not been studied before. Finally, while MB-ML has been utilized to address individual challenges related to sensing and communications, its untapped potential to significantly improve system performance in ISAC applications remains undiscovered. In view of the current literature on DL and MB-ML for ISAC, three questions arise: (i) How can efficient end-to-end MB-ML strategies be developed for multi-target positioning? (ii) What computational and performance benefits can be harnessed by employing MB-ML in ISAC systems compared to large DL models and model-based approaches? (iii) To what extent can ISAC trade-offs be improved under hardware impairments by employing MB-ML strategies compared to large DL models and model-based approaches? This paper aims to answer the above questions by studying end-to-end MB-ML for ISAC, focusing on the effect of hardware impairments in the ISAC transceiver ULA. Considering a MIMO monostatic sensing and MISO communication scenario (as depicted in Fig. <ref>), we propose novel end-to-end MB-ML strategies for joint optimization of the ISAC transmitter and sensing receiver, suitable for both single- and multi-target scenarios. Building upon our preliminary analysis in <cit.>, the main contributions of this work can be summarized as follows: * Multi-target position estimation via end-to-end learning of OFDM ISAC systems: For the first time in the literature, we investigate end-to-end learning of OFDM ISAC systems under hardware impairments at the ISAC ULA. 
To combat these hardware imperfections, we introduce novel learning architectures to simultaneously optimize the ISAC beamformer and sensing receiver. OFDM transmission enables joint angle and range (and, hence, position) estimation of multiple targets, significantly extending the single-carrier models and methods in our previous work <cit.>, and the recent works <cit.>. * MB-ML via differentiable OMP: Expanding upon the foundation laid by <cit.>, we propose a differentiable version of the OMP algorithm that is suitable for single- and multi-target sensing. This new algorithm allows for end-to-end gradient-based optimization, where we consider two different MB-ML parameterization approaches. The first approach learns a dictionary of steering vectors at each OMP iteration, extending our results in <cit.> to joint range-angle estimation and multiple targets. The second approach is new compared to <cit.> and directly learns the parameterized ULA impairments at each iteration. This offers the advantage of drastically reducing the number of parameters to be learned. * Single- and multi-target performance comparison and ISAC trade-off characterization: We first consider the single-target case (corresponding to one OMP iteration) and compare different solutions based on the extent of model knowledge: (i) NNBL[Note that the neural-network architectures in <cit.> do not directly apply to the scenario considered here due to the use of OFDM signals.], representing no knowledge of the system model, (ii) the two MB-ML approaches, where model knowledge is utilized, but impairments are learned, and (iii) a strong baseline, which fully relies on the mathematical description of the system model under no hardware impairments. Our results show that under hardware impairments, the new MB-ML ULA impairment learning outperforms all other approaches in terms of target detection and range-angle estimation, with fewer trainable parameters. Lastly, we show that impairment learning scales smoothly also to multiple targets, where it achieves better sensing and communication performance than the baseline. In the rest of this paper, we first describe the mathematical ISAC system model in Sec. <ref>. Then, we describe the two approaches to perform target positioning and communication: the baseline in Sec. <ref>, and MB-ML in Sec. <ref>. The main ISAC results are presented and discussed in Sec. <ref> before the concluding remarks of Sec. <ref>. Notation. We denote column vectors as bold-faced lower-case letters, a, and matrices as bold-faced upper-case letters, A. A column vector whose entries are all equal to 1 is denoted as 1. The identity matrix of size N× N is denoted as I_N. The transpose and conjugate transpose operations are denoted by (·)^ and (·)^, respectively. The i-th element of a vector and the (i,j)-th element of a matrix are denoted by [a]_i and []_i,j. The element-wise product between two matrices is denoted by ⊙, while ⊘ denotes element-wise division, and ⊗ denotes the Kronecker product. · denotes matrix vectorization operator. Sets of elements are enclosed by curly brackets and intervals are enclosed by square brackets. The set {x∈|x≥0} is denoted as _≥0. The cardinality of a set 𝒳 is denoted by 𝒳. The uniform distribution is denoted by , and denotes the circularly-symmetric complex distribution. The Euclidean vector norm is represented by ‖·‖_2, while the matrix Frobenius norm is denoted by ‖·‖_F. The indicator function is denoted by 𝕀{·}. 
§ SYSTEM MODEL This section provides the mathematical models for the received sensing and communication signals, the ISAC transmitted signal and the hardware impairments. In Fig. <ref>, a block diagram of the considered ISAC system is depicted. §.§ Multi-target MIMO Sensing We consider an ISAC transceiver consisting of an ISAC transmitter and a sensing receiver sharing the same ULA of K antennas, as shown in Fig. <ref>. The transmitted signal consists of an OFDM waveform across S subcarriers, with an inter-carrier spacing of Hz. In the sensing channel, we consider at most possible targets. Then, the backscattered signal impinging onto the sensing receiver can be expressed over antenna elements and subcarriers as <cit.> = 1/√(S)ψ_t (θ_t) ^(θ_t) [() ⊙(τ_t)]^ + W, where ∈KS collects the observations in the spatial-frequency domains, T ∼{0,...,} is the instantaneous number of targets in the scene, and ψ_t ∼(0,^2) represents the complex channel gain of the t-th target. The steering vector of the ISAC transceiver ULA for an angular direction θ is, under no hardware impairments, [(θ)]_k= exp(- 2 π (k-(K-1)/2) d sin (θ ) / λ), k=0,...,K-1, with d = λ / 2, λ = c/f_c, c is the speed of light in vacuum and f_c is the carrier frequency[In case of different ULAs for transmitting and receiving, different steering vector models should be used in (<ref>).]. The precoder ∈ℂ^K permits to steer the antenna energy into a particular direction. Target ranges are conveyed by (τ_t) ∈ℂ^S, with [(τ_t)]_s = exp(-j2π s τ_t), s=0,...,S-1, and where τ_t = 2R_t/c represents the round-trip time of the t-th target at R_t meters away from the transmitter. Moreover, the communication symbol vector () ∈ℂ^S conveys a vector of messages ∈^S, each uniformly distributed from a set of possible messages . Finally, the receiver noise is represented by W, with [W]_i,j∼(0,N_0). Note that if T=0, only noise is received. From the complex channel gain and the noise, we define the integrated sensing SNR across antenna elements as _r = K^2/N_0. The angles and ranges of the targets are uniformly distributed within an uncertainty region, i.e., θ_t ∼[, ] and R_t ∼[, ]. However, uncertainty regions might change at each new transmission. The position of each target is computed from target angle θ_t and range R_t as _t = [ R_tcos(θ_t); R_tsin(θ_t) ]. The transmitter and the sensing receiver are assumed to have knowledge of {, , , }. In the considered monostatic sensing setup, the receiver has access to communication data (), which enables removing its impact on the received signal (<ref>) via reciprocal filtering <cit.> = ⊘^() = α_t (θ_t) ^(τ_t) +  , where α_t=1/√(S)^(θ_t) ψ_t and = W⊘^(). The goal of the sensing receiver is to estimate the presence probability of each target in the scene, denoted as û∈ [0,1]^, which is later thresholded to provide a hard estimate of the target presence, t̂∈{0,1}^. For all detected targets, the sensing receiver estimates their angles, θ̂∈ [-π/2, π/2]^, and their ranges, R̂∈_≥ 0^, from which target positions can be estimated according to (<ref>). §.§ MISO Communication In the considered ISAC scenario, communication and sensing share the same transmitter. We assume that the communication receiver is equipped with a single antenna element. In this setting, the received OFDM signal at the communication receiver in the frequency domain is given by <cit.> = [()⊙]^(φ) + , with ∈ℂ^S denoting the S-point DFT of the channel taps [β_0, β_1, ..., β_L-1,0,...,0], where each tap is distributed as β_l ∼(0,σ_l^2). 
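To make the sensing and communication models above concrete, the following minimal NumPy sketch (our own illustration, not the authors' code) builds the ULA steering vector, the per-subcarrier delay response, the S-point DFT channel of the L taps, and the target position from an angle-range pair. All function names are ours; the subcarrier-spacing factor delta_f in the delay response is our reading of the standard OFDM model, and the numerical values in the usage lines follow the simulation setup reported later in the paper.

```python
import numpy as np

def steering_vector(theta, lam, d):
    """ULA response a(theta); d is the per-antenna spacing vector.

    With d = (lam/2) * ones(K) this is the nominal (impairment-free) array;
    a perturbed d models the structured spacing impairments discussed below.
    """
    K = len(d)
    k = np.arange(K) - (K - 1) / 2
    return np.exp(-1j * 2 * np.pi * k * d * np.sin(theta) / lam)

def delay_response(tau, S, delta_f):
    """Frequency-domain response c(tau) over S subcarriers with spacing delta_f."""
    s = np.arange(S)
    return np.exp(-1j * 2 * np.pi * s * delta_f * tau)

def freq_domain_channel(taps, S):
    """S-point DFT of the L channel taps (zero-padded), as in the MISO model."""
    return np.fft.fft(taps, n=S)

def target_position(theta, R):
    """Position p = [R cos(theta), R sin(theta)] from angle and range."""
    return np.array([R * np.cos(theta), R * np.sin(theta)])

# Tiny usage example with assumed values (carrier and array sizes as in the paper).
c0, fc = 3e8, 60e9
lam = c0 / fc
K, S, delta_f = 64, 256, 120e3
d_nom = (lam / 2) * np.ones(K)                  # ideal half-wavelength spacing
a = steering_vector(np.deg2rad(20.0), lam, d_nom)
c = delay_response(2 * 50.0 / c0, S, delta_f)   # target at R = 50 m
h = freq_domain_channel(np.random.randn(5) + 1j * np.random.randn(5), S)
p = target_position(np.deg2rad(20.0), 50.0)
```

Passing a perturbed spacing vector d to steering_vector reproduces the structured inter-antenna spacing errors considered in the hardware-impairment discussion below.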
Complex Gaussian noise ∼(0,N_0I_S) is added at the receiver side. The average communication SNR per subcarrier is defined as _c = ∑_l=1^Lσ_l^2/(SN_0). The communication receiver is assumed to be always present at a random position, such that φ∼[, ]. The transmitter has also knowledge of {, }. The receiver is fed with the CSI = ^(φ). The goal of the receiver is to retrieve the communication messages that were transmitted. §.§ ISAC Transmitter ISAC scenarios require the use of a radar-communication beamformer to provide adjustable trade-offs between the two functionalities. Using the multi-beam approach from <cit.>, we design the ISAC beamformer, based on a sensing precoder _r ∈ℂ^K, and a communication precoder _c∈ℂ^K, as (η,ϕ) = √(P)√(η)_r + √(1-η)e^ϕ_c/‖√(η)_r + √(1-η)e^ϕ_c ‖ , where P is the transmitted power, η∈ [0,1] is the ISAC trade-off parameter, and ϕ∈ [0,2 π) is a phase ensuring coherency between multiple beams. By sweeping over η and ϕ, we can explore the ISAC trade-offs of the considered system. The sensing precoder _r points to the angular sector of the targets, {, }, whereas the communication precoder _c points to the angular sector of the communication receiver, {, }. In Secs. <ref> and <ref>, we detail how _r and _c are computed for the baseline and MB-ML, respectively. However, the same precoding function is applied for sensing and communication, as represented in Fig. <ref>. §.§ Hardware Impairments We study the effect of hardware impairments in the ULA in the ISAC transceiver, which affect the steering vectors of (<ref>), (<ref>), (<ref>). Impairments in the antenna array include mutual coupling, array gain errors, or antenna displacement errors, among others <cit.>. Following the impairment models of <cit.>, we consider two types of impairments: * Unstructured impairments: In this case, the true steering vector (θ) is unknown for all angles θ, while the methods for beamforming design and signal processing assume the nominal steering vector (θ). If we consider a grid of possible angles with N_θ points, then the steering vectors require K× N_θ complex values to be described. * Structured impairments: In this case, the steering vector model is known, conditional on an unknown perturbation vector . We can thus write (θ;), where the meaning and dimensionality of depend on the type of impairment. In contrast to the unstructured impairments, the impairments are often described with a low-dimensional vector, independent of N_θ. [Impact of structured impairments] Consider the example of inter-antenna spacing errors, where ∈ℂ^K and [(θ; )]_k = exp(- 2 π (k-(K-1)/2) []_k sin (θ ) / λ), k=0,...,K-1. In Fig. <ref>, the angle-delay map (defined in Sec. <ref>) is depicted under ideal conditions (top) and hardware impairments (bottom), when T = 4 targets are present. The main effect of hardware impairments is to expand target lobes in the angle domain. In the example shown in Fig. <ref>, two targets become indistinguishable due to impairments, and the appearance of spurious lobes hinders the detection of the target at the highest range. Another effect of hardware impairments is that the magnitude of the target lobes is decreased, which makes them harder to differentiate from noise. These results highlight the relevance of addressing hardware impairments in our sensing scenario. § BASELINE In this section, we derive the baseline method according to model-based benchmarks, which will later be compared with end-to-end learning approaches in Sec. <ref>. 
§.§ ISAC Beamformer We design the baseline for the precoding mapping in Fig. <ref>, which affects both the sensing precoder _r, and the communication precoder _c in (<ref>), by resorting to the beampattern synthesis approach in <cit.>. We define a uniform angular grid covering [-π/2, π/2] with grid locations {θ_i}_i=1^. For a given angular interval (i.e., = [, ] for communications, and = [, ] for sensing), we denote by ∈1 the desired beampattern over the defined angular grid, given by []_i = K,   if  θ_i ∈θ_interval 0,   otherwise. The problem of beampattern synthesis can then be formulated as min__bs‖ - ^_bs‖_2^2, where = [(θ_1) … (θ_)] ∈K denotes the transmit steering matrix evaluated at the grid locations. This least-squares (LS) problem has a simple closed-form solution _bs = (^^)^-1^, which yields, after normalization according to the transmit power constraints, a communication-optimal beam _c or a radar-optimal beam _r, which can then be used to compute the joint ISAC beam in (<ref>). §.§ Multi-target Sensing Receiver We propose to formulate the multi-target sensing problem based on the received signal in (<ref>) as a sparse signal recovery problem <cit.> and employ the OMP algorithm <cit.> to solve it, which represents our model-based benchmark. To construct an overcomplete dictionary for OMP, we specify an angular grid {θ_i}_i=1^ and a delay grid {τ_j}_j=1^ depending on the region of interest for target detection (i.e., the a priori information {, , , }). Then, a spatial-domain and a frequency-domain dictionary covering angular and delay grids can be constructed as _a = [ (θ_1)  ⋯ (θ_) ] ∈K , _d = [ (τ_1)  ⋯ (τ_) ] ∈S . Using (<ref>), the problem of multi-target sensing based on the observation in (<ref>) becomes a sparse recovery problem = ∑_i=1^∑_j=1^ []_i,j [_a]_:, i ([_d]_:, j)^ +  , where ∈. Here, the goal is to estimate the T-sparse vector ∈1 under the assumption T ≪. The baseline OMP algorithm <cit.> to solve this problem is summarized in Algorithm <ref>, which will serve as a foundation to the proposed MB-ML approaches in Sec. <ref>. §.§ Communication Receiver We assume that the communication receiver has access to the CSI = ^(φ). Hence, the received signal can be expressed as = ⊙() +. Optimal decoding in this case corresponds to subcarrier-wise maximum likelihood estimation according to _s = min_m_s ∈[]_s - []_s x(m_s)^2, for s=0,...,S-1. Since communication decoding is already optimal, given the CSI, learning methods described in Sec. <ref> apply (<ref>) for communication message estimation. § MODEL-BASED LEARNING MB-ML is inspired by the baseline of Sec. <ref>, although we need to develop differentiable beamforming and estimation algorithms that permit end-to-end learning, as well as a suitable loss function for multiple targets. This section describes the two MB-ML methods developed for multi-target sensing: (i) dictionary learning, which learns a dictionary of steering vectors for different angles as in <cit.>, and is suitable for unstructured impairments, as defined in Sec. <ref>; (ii) impairment learning, which directly learns a parameterization of the hardware impairments and thus is suitable for structured impairments, also defined in Sec. <ref>. This section also defines the loss function to train them. §.§ Beamformer MB-ML follows the same operations (<ref>) and (<ref>) to compute the precoding vector _r or _c, given an angular interval . 
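For reference, the least-squares beampattern synthesis just described and the multi-beam ISAC combination of (<ref>) can be sketched in a few lines of NumPy. This is our own illustration under the nominal half-wavelength array: the function names are ours, the closed-form solve mirrors the normal equations of the LS problem, and the example sectors match those used in the experiments.

```python
import numpy as np

def ls_beamformer(theta_grid_deg, interval_deg, K, lam, d):
    """LS beampattern synthesis: solve (A A^H) f = A b, then normalize f.

    theta_grid_deg spans [-90, 90]; interval_deg is the desired angular sector;
    the desired pattern b equals K inside the sector and 0 outside it.
    """
    theta = np.deg2rad(np.asarray(theta_grid_deg))
    k = (np.arange(K) - (K - 1) / 2)[:, None]
    A = np.exp(-1j * 2 * np.pi * k * d[:, None] * np.sin(theta)[None, :] / lam)  # K x N_theta
    lo, hi = np.deg2rad(interval_deg[0]), np.deg2rad(interval_deg[1])
    b = np.where((theta >= lo) & (theta <= hi), float(K), 0.0).astype(complex)
    f = np.linalg.solve(A @ A.conj().T, A @ b)
    return f / np.linalg.norm(f)

def isac_precoder(f_r, f_c, eta, phi, P=1.0):
    """Multi-beam ISAC precoder: trade-off eta in [0,1], phase phi in [0, 2*pi)."""
    f = np.sqrt(eta) * f_r + np.sqrt(1 - eta) * np.exp(1j * phi) * f_c
    return np.sqrt(P) * f / np.linalg.norm(f)

# Usage with assumed parameters (K = 64 antennas, half-wavelength spacing).
K, lam = 64, 3e8 / 60e9
d = (lam / 2) * np.ones(K)
grid = np.linspace(-90, 90, 720)
f_r = ls_beamformer(grid, (-40, -20), K, lam, d)   # sensing sector
f_c = ls_beamformer(grid, (30, 50), K, lam, d)     # communication sector
f = isac_precoder(f_r, f_c, eta=0.5, phi=0.0)
```

In MB-ML, the steering matrix A (or the spacing vector d that generates it) becomes the learnable object, while the surrounding LS and combination steps are unchanged.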
Dictionary learning considers ∈ℂ^K× N_θ from (<ref>) as a free learnable parameter to account for unstructured impairments, which is comprised of KN_θ complex parameters. The new proposed impairment learning considers instead as a free learnable parameter the vector ∈ℂ^K, which represents a parameterization of the structured hardware impairments. From , the dictionary of steering vectors is computed as () = [(θ_1;) … (θ_;)], such that () is used in (<ref>) instead of . Impairment learning reduces the number of learnable parameters by taking into account the structured hardware impairments of Sec. <ref>. Indeed, it has only K complex parameters, which can be several order of magnitudes less than the dictionary learning approach, since the dictionary of steering vector needs a relatively large number of columns N_θ to perform well. Note that the operation in (<ref>), which involves the learning parameters of both MB-ML methods, is already differentiable. §.§ Sensing Receiver Range-angle estimation of targets is based on Algorithm <ref>. However, the max operation in line <ref> of Algorithm <ref> is not differentiable and the gradient of no loss function could be backpropagated in MB-ML. To circumvent this issue, we develop a differentiable algorithm which is represented in Fig. <ref>. The difference with the conventional OMP in Algorithm <ref> is that we replace the operations of lines <ref>-<ref> by the following steps: * max_i,j: We still perform this nondifferentiable operation as a temporary result to obtain the final estimation. Note that is based on an angular grid ={θ_i}_i=1^ and a delay grid ={τ_j}_j=1^. In line <ref> in Algorithm <ref>, this calculation yields the estimated angle-delay pair, which serves as foundation for the following step of the differentiable OMP algorithm. * Mask the angle-delay map, , based on angle and range resolution: in order to consider elements of that solely correspond to a single target, we select the elements around the maximum of the angle-delay map that are within the angle and range resolution. This operation also helps to obtain a differentiable angle-delay estimation, similar to line <ref> in Algorithm <ref>. We create the mask based on the angle and range resolution, since it determines the minimum angle or range for which two targets are indistinguishable. The angle and range resolutions in our case are ≈2/K ≈c/2B = c/2S, with B the bandwidth of the transmitted signal. The resolutions are considered in terms of the number of pixels of the angle-delay map, depending on and . * Softmax: We apply a softmax operation to the masked matrix from the previous operation, so that the sum of its elements is equal to 1. Unlike line <ref> in Algorithm <ref>, the softmax function is differentiable, enabling end-to-end learning. * Weighted sum: A weighted sum of and is implemented, where each weight corresponds to the output of the previous softmax operation, and they represent an estimate of the probability that a certain angle-delay pair is the true value. From this interpolation operation, an angle-delay pair (θ̂_I, τ̂_I) is obtained, which may not be included in or . From this computation, the angle-delay pairs are updated, as in line <ref> in Algorithm <ref>. Note that these four first steps (center column of Fig. <ref>), amount to looking in the dictionary for the most correlated atoms with the input, and then estimating the angle-delay pair as a convex combination of the corresponding angle-delays on the grid. 
This kind of similarity-based learning has been applied to other tasks within MIMO systems <cit.>, and is reminiscent of the attention mechanism <cit.>. * Compute estimated spatial-domain and frequency-domain vectors (θ̂_I), (τ̂_I): unlike line <ref> in Algorithm <ref>, we recompute the spatial-domain and frequency-domain vectors based on the estimated angle-delay pair of the previous step, since the estimated angle-delay pair (_I, _I) might not be contained in (, ). The sets _a and _d are updated with the new vectors, as represented in Fig. <ref>. After the previous steps, differentiable OMP continues as lines <ref>-<ref> in Algorithm <ref> to obtain the new residual ^(I+1), as depicted in Fig. <ref>. This differentiable OMP algorithm still involves looking over a grid of possible angles. We utilize as the dictionary of angles _a the same matrices and () from the beamformer of Sec. <ref> to compute , which allows parameter sharing between the co-located transmitter and receiver. The gradient of the loss function does not flow through the max operation, as illustrated in Fig. <ref>. To further improve memory efficiency, gradient flow is also discarded when computing the new residual ^(I+1) from the estimates (_I, _I). §.§ Loss Function As loss function for MB-ML multi-target sensing, we select the GOSPA loss from <cit.>. In our case, the GOSPA loss is defined as follows. Let γ>0, 0<μ≤2 and 1≤ p < ∞. Let = {_1, ..., _||} and = {_1,...,_||} be the finite subsets of ℝ^2 corresponding to the true and estimated target positions, respectively, with 0≤||≤, 0≤||≤. Let d(, ) = ‖ - ‖_2 be the distance between true and estimated positions, and (, ) = min(d(, ),γ) be the cut-off distance. Let Π_n be the set of all permutations of {1,...,n} for any n ∈ℕ and any element π∈Π_n be a sequence (π(1),...,π(n)). For || ≤ ||, the GOSPA loss function is defined as d_p^(γ,μ)(, ) = ( min_π∈Π_||∑_i=1^||(_i, _π(i))^p + γ^p/μ (-) )^1/p. If > , d_p^(γ,μ)(, ) = d_p^(γ,μ)(, ). The parameter p is proportional to the penalization of outliers, and the value of γ dictates the maximum allowable distance error. The role of μ, together with γ, is to control the detection penalization. This loss function becomes suitable for multiple targets, since it considers the association between estimated and true positions that gives the minimum loss, tackling the data association problem of multiple targets. In terms of target detection, we follow the same principle as the baseline, i.e., we stop the OMP algorithm when the maximum of the angle-delay map drops below a threshold. Sweeping this threshold over different values yields a trade-off in terms of detection and false alarm rates. § RESULTS This section details the simulation parameters and the results for single- and multi-target ISAC.[Source code to reproduce all numerical results in this paper will be made available at <https://github.com/josemateosramos/MBE2EMTISAC> after the peer-review process.] Four methods will be evaluated and compared: * The model-based baseline from Sec. <ref>, working under the mismatched assumption of no hardware impairments. * A NNBL method, extending <cit.>, which replaces the precoding and sensing estimation mappings in Fig. <ref> by NN, and can operate in the absence of any knowledge of the ISAC system (including the hardware impairments). More details can be found in Appendix <ref>. * Dictionary learning from Sec. <ref>, where the unstructured impaired steering vectors (θ) are learned for both precoding and sensing. 
* Impairment learning from Sec. <ref>, where the structured impairment vector d is learned for precoding and sensing. §.§ Simulation Parameters We consider a ULA of K=64 antennas, S=256 subcarriers, and a subcarrier spacing of 120 kHz. We set the maximum number of targets in the scene as = 5. The transmitted power is P=1 and the carrier frequency is f_c = 60 GHz. The sensing SNR across antenna elements was set to _r = K^2/N_0 = 15 dB, and the average communication SNR per subcarrier was fixed to _c = ∑_l=1^Lσ_c,l^2/(SN_0) = 20 dB. The number of channel taps in the communication channel is L=5, with an exponential power delay profile, i.e., σ_l^2 = exp(-l), l=0,...,L-1. The power delay profile is later normalized to obtain the desired average SNR. The number of grid points for angle and range is set as = 720 and =200. To train the learning methods for a wide range of angles, we randomly draw {, } as in <cit.>, i.e., we draw a realization of ∼[-60, 60] and Δ∼[10, 20], for each new transmission. The target angular sector is computed as =  - Δ/2, =  + Δ/2. The communication angular sector and the range uncertainty region are set as {, } = {30, 50}, {, } = {10, 190} m, for all transmissions. For hardware impairments, we consider the model of <cit.>, i.e., we assume structured hardware impairments where the antenna elements in the ULA array are spaced as ∼((λ/2) 1, ^2I_K). We select a standard deviation of = λ/25 = 0.2 mm. MB-ML is initialized with the same knowledge as the baseline, i.e., the steering vector models firstly assume that d=(λ/2) 1. In the GOSPA loss, we set μ=2, as recommended in <cit.>, p=2, and γ = (-)/2=90 m. The cardinality mismatch term in (<ref>) implies the use of a threshold during training. However, our goal is to train the learning methods regardless of the threshold, and then explore sensing performance by changing the threshold. Hence, during training it is assumed to know the actual number of targets T, which means that || = || = T, and the GOSPA loss during training becomes d_p^(γ,μ)(, ) = (min_π∈Π_||∑_i=1^||(_i, _π(i))^p)^1/p. However, there is no detection penalization term in (<ref>), which implies that the detection probability estimation NN of NNBL cannot be optimized. Hence, we adopt a two-step training approach for NNBL, as follows: * We first train and based on the simplified GOSPA loss of (<ref>). * While freezing the parameters ξ, we then train and by minimizing d_u^(γ_u,μ)(, ) = (min_π∈Π_||∑_i=1^|| d^(γ_u)(u_i, û_π(i))^p)^1/p, where = {u_1, ..., u_||} and = {û_1, ..., û_||} are the true and estimated sets of target probabilities, d^(γ_u)(u_i, û_π(i)) = min(d(u_i, û_π(i)),γ_u), and d(u_i, û_π(i)) = -u_ilog(û_π(i)) - (1-u_i)log(1-û_π(i)). That is, we replace the position distance error in (<ref>) with the BCE loss. Note that in (<ref>) we also assume that ||=||=T. The previous two-step training approach was observed to yield better performance, compared to joint training of all NN parameters ε, ξ, ζ based on the sum of the losses (<ref>) and (<ref>). Network optimization is performed using the Adam optimizer <cit.>, with a batch size of B=3000 and 100,000 training iterations. The learning rate of dictionary and impairment learning was set to 5·10^-3 and 10^-7, respectively. In the two-step training approach for NNBL, 100,000 training iterations are applied to each of the steps. Position estimation training used a learning rate of 10^-2, while target detection utilized 10^-3 as learning rate. The architecture of NNBL is described in Appendix <ref>. 
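Since the GOSPA metric drives both training and evaluation, a compact reference implementation helps fix the conventions. The sketch below (ours) follows the definition given above, with the cut-off distance, a brute-force minimum over assignments (adequate for the small maximum number of targets used here), and the gamma^p/mu penalty per cardinality mismatch; during training with known T the penalty term vanishes because the two sets have equal size.

```python
import numpy as np
from itertools import permutations

def gospa(X, Y, p=2.0, gamma=90.0, mu=2.0):
    """GOSPA distance between true positions X and estimates Y (rows are 2-D points).

    Uses the cut-off distance min(||x - y||, gamma), a brute-force minimum over
    injective assignments, and a gamma^p / mu penalty for each unassigned point.
    """
    X, Y = np.atleast_2d(X), np.atleast_2d(Y)
    if len(X) > len(Y):                      # the definition is symmetric via swapping
        X, Y = Y, X
    n, m = len(X), len(Y)
    if n == 0:
        return (gamma ** p / mu * m) ** (1.0 / p) if m > 0 else 0.0
    best = np.inf
    for perm in permutations(range(m), n):   # assign each true target to one estimate
        dists = np.linalg.norm(X - Y[list(perm)], axis=1)
        best = min(best, np.sum(np.minimum(dists, gamma) ** p))
    return (best + gamma ** p / mu * (m - n)) ** (1.0 / p)

# Example: two true targets, three estimates (the extra estimate is penalized).
X = np.array([[30.0, 40.0], [100.0, 10.0]])
Y = np.array([[31.0, 39.0], [98.0, 12.0], [150.0, -20.0]])
print(gospa(X, Y))
```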
NNBL also benefited from using a scheduler, to reduce the learning rate when the loss function has reached a plateau. Details of the scheduler parameters can be found in Appendix <ref>. §.§ Performance Metrics Concerning testing, we compute as detection performance metrics a measure of the probability of misdetection and the probability of false alarm, for multiple targets. We use the same definitions as in <cit.>, which correspond to = 1-∑_i=1^Bmin{T_i, _i}/∑_i=1^B T_i, = ∑_i=1^B max{T_i, _i} - T_i/∑_i=1^B - T_i, where T_i, _i are the true and estimated number of targets in each batch sample, respectively. The regression performance is measured via the GOSPA (for multiple targets sensing) and RMSE (for single target sensing). As communication performance metric, we use the average SER across subcarriers, computed as SER = 1/BS∑_i=1^B ∑_j=1^S 𝕀{[_i]_j ≠ [_i]_j}, with _i and _i the true and estimated message vectors at the i-th batch sample. All described methods in this paper (baseline of Sec. <ref>, MB-ML of Sec. <ref>, and NNBL) use a QPSK encoder, and the message estimation rule in (<ref>). §.§ Single-target ISAC In single-target ISAC, the maximum number of targets is =1, which implies that the GOSPA loss function in (<ref>) becomes (, ). However, in order to compare with our previous work <cit.>, we train MB-ML and position estimation of NNBL using the MSE loss d(, )^p = - _2^2, and detection estimation of NNBL using the BCE loss, d(u, ) = -ulog() - (1-u)log(1-). Position estimation is assessed by the angle RMSE, √([(θ-θ̂)^2]), and the range RMSE, √([(R-R̂)^2]). ISAC performance results are represented in Fig. <ref>, where we sweep over [0,1] and [0,7π/4], taking 8 uniformly spaced values, to set η and ϕ in (<ref>), respectively. For testing, we fixed {, } = {-40, -20}[Unless otherwise stated, the authors also tested other values of {, }, and the results were qualitatively the same.]. The probability of false alarm was set to = 10^-2. Result show that under no complexity limitations (solid lines) and hardware impairments, learning methods outperform the baseline in terms of misdetection probability, angle and range estimation, and SER, which implies that learning methods have adapted to hardware impairments. Communication performance, even in the case of optimal symbol estimation, is enhanced by learning approaches, which suggests that the impairments have a significant impact on the optimal communication precoder. In addition, dictionary learning outperforms NNBL for range estimation, although the converse happens for misdetection probability. Impairment learning yields the best performance among all learning methods, and with fewer parameters, which usually implies less training time. Indeed, NNBL is composed of a total of 7.78 million real learnable parameters, while dictionary learning uses K = 40,080 complex parameters, and impairment learning consists of K=64 complex parameters. Under limited complexity, the number of parameters of dictionary learning and NNBL are restricted. We follow the approach of <cit.>, and restrict the number of (complex) parameters of dictionary learning by setting = 156, which reduces the number of parameters to 9,984 complex parameters. The complexity constraints applied to NNBL-learning are detailed in Appendix <ref>, which decreases the number of real parameters to 10,555. From Fig. <ref>, it is observed that while NNBL drops in performance, especially for angle and range estimation, dictionary learning still yields better results than the baseline. 
However, dictionary learning also decreased in performance compared to the unconstrained approach, which means that dictionary learning cannot achieve the same performance as impairment learning for the same number of parameters. Lastly, we test all learning approaches for a scenario that was not encountered during training, to assess their generalization capabilities. Fig. <ref> depicts the performance of the learning methods for {, } = {-20, 20}, which includes a span of the angular uncertainty region wider than expected. The complexity of the networks is not restricted. The performance of all learning approaches has dropped compared to Fig. <ref>. However, while NNBL performs worse than the baseline, and dictionary learning yields similar results to the baseline, impairment learning is the only approach that still outperforms the baseline. NNBL and dictionary learning appear to overfit to the training data and degrade for unexpected inputs. This means that for new testing scenarios, impairment learning is the learning approach that best generalizes in terms of performance. This is due to the fact that impairment learning is the only method for which parameters are shared between all directions (all columns of the dictionary are affected each time the parameters are updated). Dictionary learning does not exhibit this feature, since each column of the dictionary (corresponding to a direction) is considered an independent set of parameters. §.§ Multi-target ISAC Based on the results of Sec. <ref>, impairment learning performs the best among all considered learning methods for the simpler case of single-target ISAC. Hence, we only consider impairment learning to compare against the baseline for multi-target sensing. The batch size for MB-ML is decreased to B=1500 due to memory restrictions. The number of iterations was also reduced to 25,000, since finding the association between estimated and true data that minimizes the GOSPA loss of (<ref>) increases training time. In addition, ISAC results perform very close to perfect knowledge of impairments, as observed in the following. We first compare the performance of the differentiable OMP algorithm of Sec. <ref> with the baseline, when hardware impairments are perfectly known. In Fig. <ref>, the sensing performance of both approaches is depicted. Results show that differentiable OMP performs closely to the baseline. The difference in performance might be because the dictionary _a in the baseline only covers the angular range {, }, while differentiable OMP uses a fixed dictionary that covers [-π/2, π/2]. However, this allows for efficient parameter sharing in MB-ML. Differentiable OMP takes a weighted sum of angles and ranges, which permits to select an angle or range outside the predefined dictionaries, unlike the baseline. The GOSPA loss in Fig. <ref> achieves a minimum for different false alarm probabilities, since it takes into account both position and detection errors. For high , OMP estimates a higher number of targets than the true value, and conversely for low . Fig. <ref> shows the results of the baseline without impairment knowledge, differentiable OMP with perfect impairment knowledge, and impairment learning. Impairment learning outperforms the baseline, which illustrates the adaptability of impairment learning to antenna imperfections in multi-target sensing. Moreover, the performance is very close to perfect knowledge of the impairments, which suggests that the learned spacing is quite similar to the underlying reality. 
In terms of ISAC trade-off, Fig. <ref> presents the ISAC trade-offs in case of multiple targets when = 10^-2. In this case, we sweep in (<ref>) over η and fixed ϕ = 0, since in Figs.<ref> and <ref> we observed that the effect of ϕ is not very significant. Compared to Fig. <ref>, it is observed that impairment learning also outperforms the baseline when impairments are not known in terms of communication performance, due to the impact of hardware impairments in the communication precoder. § CONCLUSIONS In this work, we studied the effect of antenna spacing impairments in multi-target ISAC, and different learning approaches to compensate for such impairments. A new efficient MB-ML approach to perform end-to-end learning and impairment compensation was proposed, based on a differentiable OMP algorithm. Simulation results showed that learning approaches outperform the baseline and they can compensate for hardware impairments. Among learning methods, the new proposed impairment learning approach outperformed all other considered methods, also exhibiting better generalization capabilities to new testing data, with much fewer parameters to optimize. Simulations results verify that injection of the system and impairment knowledge in learning methods improves their performance and reduces their complexity. § NNBL Since the optimal detection and estimation rules might not be tractable, NNBL can be trained based on data to achieve optimality. Moreover, when no information about the impairments is available, NNBL can provide data-driven solutions to account for them. This appendix describes the principles and architecture of the considered NNBL approach. §.§ Principles NNBL replaces the precoding and sensing estimation mappings in Fig. <ref> by NN. The precoding network, :^2→^2K, takes as input and produces a precoder as output, where ε corresponds to the learnable parameters. NN in this work are considered to work with real-valued numbers, hence, the output dimension is doubled. The same mapping is applied to both sensing and communication precoders, to obtain _r and _c, which are later used to design the ISAC precoder according to (<ref>). Sensing estimation is divided into two tasks, each corresponding to a different NN: (i) detection probability estimation, and (ii) position estimation. As input to both NN, we use ∈^× defined in Sec. <ref>, instead of , since we observed a better sensing performance. In addition to the angle-delay map, the input is also composed of the a priori information {, , , }, as shown in Fig <ref>, to improve network performance. The output of each NN is task-dependent. The detection probability network, : ^××^4→ [0,1]^, outputs a probability vector û whose elements correspond to the probability that each target is present in the scene, which is later thresholded to provide an estimate of the number of targets. The position estimation network, : ^××^4→^×2, outputs a matrix P̂ whose columns represent the position estimation of each potential target. The learnable parameters of each network are ζ and ξ, respectively. Both NN are trained based on the GOSPA loss function of Sec. <ref>. §.§ NN Architectures The precoding operation of Fig. <ref> was implemented as a MLP, whose input is an angular sector ({, } or {, }), with 3 hidden layers of 8K neurons and an output layer of 2K neurons, where we recall that K=64 is the number of antennas in the ULA transceiver. 
The activation function after each layer is the ReLU function, except for the final layer, which contains a normalization layer to ensure a unit-norm output, i.e., ‖_bs‖_2=1. For the receiver side, we resort to CNN given the 2-dimensional nature of the input , as represented in Fig. <ref>. The receiver architecture repeats a set of layers, represented in Fig. <ref>, which we call residual bottleneck block. This block was inspired by the ResNet architecture <cit.>. A convolutional layer is first introduced with some stride to decrease the number of pixels to process. Then, 2 bottleneck blocks with skipped connections similar to <cit.> follow. However, we reduce the number of activation functions and normalization layers, as suggested in <cit.>. Another residual connection is introduced from the beginning to the end of both bottleneck blocks to help with gradient computation. We observed that splitting position estimation into angle and range estimation, each of them involving a CNN, yielded better results than using a single network. Angle and range estimates are later combined into a position vector following (<ref>). The common architecture for all CNN (detection, angle and range estimation) is shown in Table <ref>. Convolutional layers introduce zero-padding so that the number of pixels is preserved. After the first and last convolutional layers, a 2-dimensional batch normalization and a ReLU activation function are also applied. The resulting feature map of the CNN has / 2^12 elements. For NNBL, = 320 and = 128 due to memory constraints. The resulting feature map from the convolutional layers, together with the a priori information {, , , } of the target locations, are processed by MLP. The angle estimation network only uses {, }, the range estimation network {, }, and the detection network utilizes both of them. The architecture of each MLP is described in Table <ref>. The activation function after each fully-connected layer is the ReLU function. Unless stated otherwise, all NN architectures were optimized to give the best ISAC performance, where we explored, for instance, kernel sizes up to 13x13, the number of residual bottleneck blocks from 3 to 7, or the number of layers of the MLP of Table <ref>, from K to 64K, among others. When training NNBL, a scheduler is used to reduce the learning rate if the loss function plateaus. The patience of the scheduler was set as 10^4 iterations. If the loss function was regarded to plateau, the learning rate was decreased by half, with a minimum attainable learning rate of 10^-6. When complexity limitations are considered, in the transmitter network the number of neurons in each hidden layer was reduced to 4. At the receiver side, the kernel size of the Maxpool layer is increased to 4x4, the number of residual bottleneck blocks is changed from 6 to 3, the number of channels in the network is reduced by a factor of 4, and the number of neurons in the hidden layer of the last MLP are constrained to 4. IEEEtran
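As a rough rendering of the precoding MLP described above, the following PyTorch sketch uses the reported layer sizes (three hidden layers of 8K neurons, an output of 2K real values, ReLU activations, and a final unit-norm normalization). It is our own reconstruction from the text: input scaling, initialization, and other implementation details of the authors' network may differ.

```python
import torch
import torch.nn as nn

class PrecoderMLP(nn.Module):
    """Precoder sketch: input is an angular sector (min, max); output is a
    unit-norm length-K complex precoder represented as 2K real values."""

    def __init__(self, K=64, hidden=8 * 64, n_hidden=3):
        super().__init__()
        layers, in_dim = [], 2
        for _ in range(n_hidden):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        layers += [nn.Linear(in_dim, 2 * K)]    # final layer has no ReLU
        self.net = nn.Sequential(*layers)

    def forward(self, sector):                      # sector: (batch, 2)
        out = self.net(sector)                      # (batch, 2K) real values
        return out / out.norm(dim=-1, keepdim=True)  # unit-norm output

# Usage: precoder for the sensing sector [-40, -20] degrees (any input
# normalization used by the authors is omitted here for brevity).
f = PrecoderMLP()(torch.tensor([[-40.0, -20.0]]))
```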
http://arxiv.org/abs/2307.04941v1
20230711000328
MG3MConv: Multi-Grained Matrix-Multiplication-Mapping Convolution Algorithm toward the SW26010 Processor
[ "Zheng Wu" ]
cs.DC
[ "cs.DC" ]
Zheng Wu ([email protected]; corresponding author), University of Science and Technology of China, Shushan District, Hefei, China As the core computation of artificial intelligence applications, convolution has become a hot topic in high-performance computing. With the rapid adoption of the emerging SW26010 processor in artificial intelligence, there is an urgent need for high-performance convolution algorithms on this processor. However, the current support for convolution on SW26010 remains rudimentary: the few existing studies deliver adequate runtime peak performance but lack adaptability to the wide variety of convolution scenes. To improve convolution support on SW26010, we propose a multi-grained matrix-multiplication-mapping convolution algorithm called MG3MConv, which targets the architectural features of SW26010. MG3MConv supports diversified mapping schemes of convolution tasks based on the concept of the thread block proposed in this paper. All the architecture-oriented optimization methods are carefully designed at four levels to fully exploit the hardware efficiency of SW26010. Experiments show that the hardware efficiency of MG3MConv reaches up to 84.78%, which is 1.75 times that of cuDNN on an NVIDIA K80m GPU. Moreover, MG3MConv outperforms cuDNN in most convolution scenes. We also use six representative CNNs as real-world cases; the hardware efficiency of MG3MConv reaches 67.04% on the VGG network model, which is 1.37 times and 1.96 times that of cuDNN and swDNN, respectively. § INTRODUCTION Deep learning has vastly promoted the development of artificial intelligence. As one of the most successful neural network models in deep learning, CNNs (convolutional neural networks) are widely used in numerous fields <cit.> such as computer vision, speech recognition, natural language processing, autonomous driving, and intelligent healthcare. The execution time of CNNs becomes long and unacceptable as larger data sets and more complex CNNs emerge. Because convolution accounts for more than 90% of the total computation in CNNs <cit.>, highly efficient convolution algorithms on many-core processors have become a popular research direction in academia and industry. Nowadays, GPUs and CPUs are the most mature many-core processor platforms for CNNs. Many studies are devoted to improving the performance of convolution on GPUs <cit.> and CPUs <cit.>, which has driven the maturation of deep neural network libraries such as NVIDIA cuDNN <cit.> and Intel MKL-DNN <cit.>. Furthermore, the acceleration of convolution algorithms on other hardware platforms also attracts researchers, such as Cambricon's DianNao series <cit.>, Google's TPU <cit.>, and SW26010 <cit.>. As the main contributor to the computational power of the world-class Sunway TaihuLight supercomputer, SW26010 <cit.> has several special architectural features such as a user-controllable memory hierarchy, asynchronous direct memory access (DMA), on-chip register communication, and double-pipeline instruction execution. These features provide great potential for running artificial intelligence applications based on CNNs. However, the current support for convolution on SW26010 is still rudimentary. The existing studies <cit.> deploy optimization methods by simply mapping convolution tasks onto the whole CG (core group).
They continually push the runtime peak performance of the algorithms but rarely consider adaptability to changing convolution scenes. In particular, performance is poorest when the batch number and channel number are small. Moreover, due to certain limitations of SW26010 <cit.>, research on the commonly used single-precision convolution has been scarce. In this paper, we propose a multi-grained matrix-multiplication-mapping convolution algorithm called MG3MConv. Unlike the existing studies, MG3MConv employs diversified mapping schemes of convolution tasks instead of a one-size-fits-all mapping scheme based on the whole CG, which allows it to cope more effectively with different convolution scenes. This paper mainly aims at optimizing and implementing single-precision convolution on SW26010 to make up for the lack of relevant work. Drawing on existing parallel optimization methods for SW26010 <cit.>, we conduct more comprehensive and fine-grained designs for MG3MConv. The main contributions of our work can be summarized as follows: * We propose MG3MConv, which employs a multi-grained mapping scheme of convolution tasks to deal with various convolution scenes. * We simulate a new concept, the TB (thread block), between the CG and the thread in software, and then manually divide one CG into multiple TBs to support the multi-grained mapping scheme of MG3MConv. Moreover, this paper integrates many architecture-oriented optimization techniques at four levels (CG-level, TB-level, thread-level, and instruction-level), such as double buffering, on-chip data sharing, and instruction reordering. * This paper conducts experiments from two perspectives: (1) adaptability; (2) practicality. The hardware efficiency of MG3MConv reaches up to 84.78%, which is 1.75 times that of cuDNN on an NVIDIA K80m GPU. Moreover, in most convolution scenes, MG3MConv performs better than cuDNN. For representative CNNs, MG3MConv achieves a hardware efficiency of 67.04% on VGG, which is 1.37 times and 1.96 times that of cuDNN and swDNN, respectively. Finally, we organize an additional experiment to demonstrate the superiority of the multi-grained mapping scheme of MG3MConv. The rest of this paper is organized as follows. Section 2 presents the background of CNNs and the SW26010 architecture. Section 3 discusses the related work. Section 4 presents the details of implementing MG3MConv. Section 5 evaluates the proposed convolution algorithm. Section 6 concludes the paper. § BACKGROUND §.§ Convolutional Neural Networks Owing to benefits such as weight sharing, sparse interaction, and equivariant representation, CNNs stand out among deep neural network models and have propelled the rapid development of computer vision, speech recognition, natural language understanding, and other fields <cit.>. Convolutional layers are central to CNNs, and their huge computation cost creates strong demand for highly optimized convolution algorithms. <Ref> symbolically defines the convolutional parameters to facilitate the subsequent description. The input, filter, and output are denoted as 𝐈𝐍, 𝐅𝐋𝐓, and 𝐎𝐔𝐓, respectively. Training a CNN iterates over batch after batch of sample data, continuously improving the model quality; we label the batch number as B. Moreover, 𝐈𝐍 has IC channels, each of which can be viewed as an input feature map of size inH × inW. Similarly, 𝐎𝐔𝐓 consists of OC channels, each corresponding to an output feature map of size outH × outW.
𝐅𝐋𝐓 has OC × IC filters, and the size of each filter is fltH × fltW. In addition, we denote the height and width of padding by padH and padW, respectively. Similarly, the height and width of stride are denoted as stdH and stdW. For the elementary convolutional computation, IC filters convolution IC input feature maps one to one, and then an output feature map can be obtained by accumulating the IC partial results. Therefore, one convolution requires OC × IC filters. The overall process of convolution can be simplified to a tensor multiplication and accumulation routine about 𝐈𝐍, 𝐅𝐋𝐓, and 𝐎𝐔𝐓 as <Ref>. As shown in <Ref>, the most simple convolution algorithm <cit.>, called direct convolution, is based on seven nested loops. §.§ SW26010 Architecture SW26010 <cit.> is a heterogeneous many-core processor independently developed by the Shanghai National High Performance Integrated Circuit Design Center. <Ref> shows its detailed architecture. The processor adopts Shenwei-64 Instruction Set, which integrates 260 cores operating at 1.45GHz. SW26010 is able to provide the theoretical peak performance of 3.06TFlops. All the cores are uniformly distributed across four equivalent CGs. Each CG consists of one MPE (management processing element) and 64 CPEs (computing processing elements). The 64 CPEs are organized as an 8x8 grid called the CPE cluster. The four CGs are interconnected via a NoC (network on chip) and support 32GB DDR3 memory. Each CG is directly connected to 8GB memory via a private MC (memory controller). The MPE handles management and communication functions, while the CPE is mainly used to process computational tasks. An MPE has two levels of private cache, including a 32KB L1 instruction cache, a 32KB L1 data cache, and a 256KB L2 cache. Similarly, a CPE has a 16KB L1 instruction cache and a 64KB SPM (Scratchpad Memory) called LDM (Local Device Memory). The LDM can be regarded as a user-controllable fast buffer, and different LDM usage strategies will lead to different DMA efficiency. The 64 CPEs of one CG share a direct-mapped L2 instruction cache of 64KB. SW26010 has many unique features in computation and data access. From the perspective of computation, both the MPE and the CPE support 4-channel floating-point vector computations and fused-multiply-add instructions. However, the MPE has two floating-point units and an instruction pipeline, while the CPE has one floating-point unit and two pipelines, P0 and P1. The P0 is used for scalar/vector computational operations of both floating-point and integer, while the P1 is for data transfer, comparison, jump, and integer scalar operations. From the perspective of data access, two key technologies are adopted to relieve the pressure of off-chip data access on SW26010. One is two kinds of data access from the main memory to the LDM, gld/gst discrete memory access and DMA batched memory access. The former can directly read and write the main memory, while the latter employs the LDM as a bridge to indirectly access the main memory. Stream Triad Test <cit.> shows that both bandwidths can reach up to 1.48GB/s and 22.6GB/s, respectively. The other is register communication, which enables data sharing among the 64 CPEs of one CG. Each CPE is equipped with a sending buffer, a row receiving buffer, and a column receiving buffer, which can contain 6, 4, and 4 register messages, respectively. 
There are three attention points about the register communication mechanism: (1) the data size is fixed at 256-bits each communication; (2) each CPE only communicates with CPEs of the same row or the same column; (3) the communication is anonymous, and target CPEs receive messages based on the FCFS (first-come-first-serve) principle. § RELATED WORK There are four mainstream convolution algorithms, called direct, GEMM-based, FFT-based, and Winograd-based convolutions. As described in Section 2.1, direct convolution is easy to implement but is difficult to optimize because of its poor data locality. Due to the successful matrix multiplication libraries on many hardware platforms, GEMM-based convolution has become a popular method to accelerate the convolutional process, divided into explicit <cit.> and implicit ones <cit.>. Explicit GEMM-based convolution needs to extract the input, and then fill input matrices with size ( IC× fltH× fltW ) ×( outH× outW ) according to filter matrices with size OC×( IC× fltH× fltW ). The algorithm maximizes the performance of matrix multiplications in convolution, but at the cost of abundantly extra memory and data access. Implicit GEMM-based convolution converts direct convolution into multiple small matrix multiplications by exploiting the potential matrix multiplication relation based on B, IC, and OC. Small matrices can be loaded directly into on-chip storage to avoid unnecessary off-chip memory occupation. Moreover, a suitable data format <cit.> can even put the cost of extra data access zero. Unlike GEMM-based convolution, both Winograd-based and FFT-based convolution reduce the computation complexity of convolution. FFT-based convolution <cit.> converts the input and filter into the frequency domain space, completes those matrix multiplications <cit.>, and converts the result back into the time domain space to get the final convolutional result. The algorithm can reduce the computation complexity of convolution from O( outH^2× fltH^2 ) to O( outH^2×log outH ) <cit.>. However, the process requires expanding the filter size to the size of input feature maps, which is highly unfriendly for CNNs with small-filter convolution. Winograd-based convolution <cit.> can reduce the computation complexity to O( ( outH+fltH-1 ) ^2 ). The disadvantage is too inflexible. Its data transformation process changes with the filter size and strictly restricts the stride size. In addition, FFT-based and Winograd-based convolution will consume amounts of memory to store intermediate data. There are many excellent studies on optimizing convolution algorithms. Li et al. <cit.> optimized direct convolution by register partitioning, and the performance in large-filter cases was improved by 33% compared with cuDNN. Park et al. <cit.> proposed ZeroSkip and AddOpt to optimize convolution. The experiments show that the enhanced Winograd-based convolution using ZeroSkip has a performance improvement of 51.8% compared with the basic Winograd-based one. Vasudevan et al. <cit.> presented a GEMM-based convolution without im2col operations, eliminating the input replication. In most selected layers of GoogLeNet, VGG-16 and AlexNet, the result is evaluated on Intel® Core™ i5-4570 and is better than MKL-DNN. Wang et al. <cit.> proposed a novel implicit im2bcol+IMM convolution to fuse im2col into matrix multiplication, which dedicated the effort to alleviate extra memory consumption and data access consumption. Li et al. 
<cit.> proposed a coordination tiling and batching framework for efficient batchedGEMM on GPUs. The framework is mainly composed of a tiling engine and a batching engine. Using GoogleNet as a real-world scene, the test achieved x1.24 speedup. Kasagi et al. <cit.> substituted a single layer for a pair formed by a convolutional layer and the following average-pooling layer. The forward performance of ResNet-34 has x17.1 speedup on Intel Core i7-6700k, while the backward x9.17. Kateoka et al. <cit.> presented the convolution-pooling computation technique using the direct sum computation instead of the SATs of Kasagi et al. <cit.>, considering the small pooling size is used in CNNs. Except for NVIDIA GPUs and Intel CPUs, the emerging many-core SW26010 processor has also attracted researchers, but a few studies have been done for convolution on SW26010. Among the existing studies <cit.>, Fang et al. <cit.> rescheduled and mapped the seven nested loops of direct convolution to four CGs. The performance of double-precision convolution is up to 54% of the theoretical peak performance. Zhao et al. <cit.> introduced the support of single-precision convolution based on the study of Fang et al. <cit.>, but the performance is far lower than of double-precision convolution. Reordering the kernel instruction queue and reducing the data access cost of DMA, Zhang et al. <cit.> further optimized the double-precision convolution implementation on SW26010 and achieved 81% of the theoretical peak performance on the best case. However, the current support for convolution on SW26010 is still rudimentary. These efforts excessively focus on maximizing the peak performance of double-precision convolution while ignoring commonly used single-precision one and changeable convolution scenes in CNNs, which is contrary to real-world applications. This paper will solve the shortages of performance and adaptability for single-precision convolution to satisfy applications using CNNs on SW26010. § IMPLEMENTATION AND OPTIMIZATION OF CONVOLUTION Given the following two points: (1) SW26010 has limited main memory capacity and high-overhead memory access; (2) the support of convolution on SW26010 is still rudimentary, we choose the implicit GEMM-based convolution as the basis of our work. The values of B, IC, and OC are often small in CNNs, so convolution implementations that directly call matrix multiplication interfaces are inefficient according to the research <cit.>. Therefore, we design a novel parallel convolution algorithm called MG3MConv. Unlike the traditional optimization methods on SW26010, this paper puts forward the concept of the thread block, called TB, between the CG and the thread. We realize TB by software simulation to assist the implementation of MG3MConv. Therefore, the guiding ideology of this paper is divided into four levels: CG-level, TB-level, thread-level, and instruction-level optimization. §.§ CG-level Optimization CG-level optimization aims to efficiently organize and map convolution tasks in MG3MConv. §.§.§ Matrix-multiplication convolution A three-layer nested cycle of B, IC, and OC remains after hiding fltH, fltW, outH, and outW in direct convolution. Further, we place B, IC, and OC in low dimensions to improve the data locality. Therefore, this paper designs the data layout of 𝐈𝐍 as [inH,inW,IC,B], 𝐅𝐋𝐓 as [fltH,fltW,IC,OC], and 𝐎𝐔𝐓 as [outH,outW,OC,B]. The default data type is single precision, commonly applied to real-world CNNs <cit.>. 
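Under these layouts, the whole convolution decomposes into per-position small matrix products, which is exactly the view formalized in the equations that follow. The NumPy sketch below is only an illustration of that mapping (names are ours; padding is omitted and the stride handling is simplified), since the actual MG3MConv kernel runs on the CPEs rather than in Python.

```python
import numpy as np

def conv_implicit_gemm(IN, FLT, stdH=1, stdW=1):
    """Reference for the implicit-GEMM view with layouts
    IN[inH, inW, IC, B], FLT[fltH, fltW, IC, OC] -> OUT[outH, outW, OC, B].

    Each (fh, fw, oh, ow) contributes one small matrix product
    OUT[oh, ow] += FLT[fh, fw]^T @ IN[oh*stdH + fh, ow*stdW + fw],
    i.e. an (IC x OC)-by-(IC x B) MM_unit, which MG3MConv maps to a thread block.
    Padding is omitted for brevity.
    """
    inH, inW, IC, B = IN.shape
    fltH, fltW, _, OC = FLT.shape
    outH = (inH - fltH) // stdH + 1
    outW = (inW - fltW) // stdW + 1
    OUT = np.zeros((outH, outW, OC, B), dtype=IN.dtype)
    for fh in range(fltH):
        for fw in range(fltW):
            for oh in range(outH):
                for ow in range(outW):
                    IN_mtx = IN[oh * stdH + fh, ow * stdW + fw]   # IC x B
                    FLT_mtx = FLT[fh, fw]                         # IC x OC
                    OUT[oh, ow] += FLT_mtx.T @ IN_mtx             # OC x B
    return OUT

# Usage with assumed sizes: inH = inW = 8, IC = 16, B = 4, 3x3 filters, OC = 32.
IN = np.random.rand(8, 8, 16, 4).astype(np.float32)
FLT = np.random.rand(3, 3, 16, 32).astype(np.float32)
OUT = conv_implicit_gemm(IN, FLT)        # shape (6, 6, 32, 4)
```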
The convolutional process without fltH, fltW, outH, and outW is as follows in <Ref>: 𝐎𝐔𝐓[ oc,b ] +=∑_ic^IC-1𝐅𝐋𝐓[ ic,oc ] ×𝐈𝐍[ ic,b ] <Ref> can be regarded as matrix multiplication operations with transposition. We mark it as MM_unit to distinguish from matrix multiplication in BLAS. Thus, the following <Ref> can be obtained. <Ref> views 𝐈𝐍 as an array of size inH× inW. Each element is marked 𝐈𝐍_mtx with a size of IC× B. Similarly, 𝐅𝐋𝐓 is an array of size fltH× fltW, where the size of each element is IC× OC. 𝐎𝐔𝐓 is an array of outH× outW corresponding to each element with a size of OC× B. The elements of 𝐅𝐋𝐓 and 𝐎𝐔𝐓 are marked as 𝐅𝐋𝐓_mtx and 𝐎𝐔𝐓_mtx, respectively. Therefore, by applying matrix multiplications as convolution task units, we redesign the seven-layer loop of direct convolution into a four-layer loop. The redesigned algorithm has more complex computational processes and data relationships than matrix multiplication. Through exploring the computational processes and data relationships, we implemented the highly optimized convolution algorithm MG3MConv. §.§.§ Multi-grained mapping For the general GEMM-based convolution algorithm, M of matrix multiplications refers to the number of output channels, N refers to the size of output feature mappings and the batch number, and K refers to the filter size and the number of input channels. Overall, M, N, and K are less than 1000, and even M in half of the cases is less than 100 <cit.>. In <Ref>, the parameters of matrix multiplications become smaller, where N is only the batch number and K is only the number of input channels. Taking the convolution in inception3a/5x5 of GoogleNet as an example <cit.>, after transforming it to GEMM, M, N, and K are 32, 128, and 16 respectively. For the above case, the FP32 performance of the matrix multiplication is 0.408GFlops on SW26010, which only plays 0.055% of the theoretical peak performance. The plain matrix multiplication mapping based on the whole CG is difficult to perform well when matrix multiplication scale is small. Therefore, we present the concept of the TB. By zoning the CPE cluster by software, we can partition one CG into multiple TB. Each TB works independently while multiple CPEs within one TB work cooperatively, thereby improving the utilization of hardware resources. Eventually, we designed an original convolution algorithm, MG3MConv, toward the SW2010 processor. When the matrix multiplication scale is small, forcing 8x8 mapping will result in a single CPE gaining too small tasks and poor performance <cit.>. Accordingly, the simple convolution algorithm based on the above scheme is also inefficient. <Ref> can solve the problem well. MG3MConv distinguishes MM_unit into three different scales shown in <Ref>, in order of small-scale MM_unit, medium-scale MM_unit, and large-scale MM_unit, corresponding to different grained TBs, which are TB(1,1), TB(1,8), and TB(8,8) separately. For TB(1,1), a single MM_unit is mapped to one CPE, and the CPE cluster can perform 64 independent tasks simultaneously. Similarly, TB(1,8) maps a single MM_unit to a row of CPEs, performing 8 independent tasks in parallel. TB(8,8) maps a single MM_unit to the whole CG, similar to matrix multiplication algorithms on SW26010. However, the design ideas of matrix multiplication exploit the convolutional potential on SW26010 insufficiently, so we will further introduce other optimization methods in this paper. We can convert one task of TB(1,8) to multiple tasks of TB(1,1) by tiling MM_unit. 
Similarly, one task of TB(8,8) can be converted to multiple tasks of TB(1,8). The specific tiling method can refer to <cit.>, and we will not repeat it. Except for the above three division schemes of TB, there are others such as TB(1,2), TB(2,2), and TB(2,4). However, this paper aims to propose and verify the feasibility of the above idea, so we will only discuss and implement MG3MConv based on TB(1,1), TB(1,8), and TB(8,8). §.§ TB-level Optimization TB-level optimization focuses on task collaboration among multiple CPEs within a single TB. §.§.§ Multi-mode on-chip data sharing SW26010 provides a low-latency on-chip register communication mechanism. For TBs with more than one CPE, designing algorithms to increase on-chip data reuse within every TB can significantly reduce the pressure on memory access. Therefore, we propose two different modes of on-chip data sharing strategies for TB(1,8) and TB(8,8): single-broadcast on-chip data sharing and dual-broadcast on-chip data sharing. The dimensions of 𝐅𝐋𝐓_mtx, 𝐈𝐍_mtx, and 𝐎𝐔𝐓_mtx matrices in the MG3MConv are IC× OC, IC× B, and OC× B, respectively. For TB(1,8) shown in <Ref>, one MM_unit is mapped to a row of CPEs, and 𝐎𝐔𝐓_mtx is divided into 8 equal parts along the OC dimension. Each CPE is responsible for one OC8× B submatrix of 𝐎𝐔𝐓_mtx, which requires one IC×OC8 submatrix of 𝐅𝐋𝐓_mtx and one IC× B submatrix of 𝐈𝐍_mtx. At this time, 𝐈𝐍_mtx is repeatedly loaded 8 times, resulting in the high cost of memory access. We propose a single-broadcast on-chip data sharing to solve the problem. Further, divide 𝐈𝐍_mtx into 8 equal parts along the IC dimension, and then perform the row broadcast of submatrices of 𝐈𝐍_mtx in turn to finish the following computation: 𝐎𝐔𝐓_mtx[ i ] =∑_k=0^7𝐅𝐋𝐓_mtx[ i,k ] ×𝐈𝐍_mtx[ k ] As illustrated in <Ref>, CPE[ i ] represents the i-th CPE in one row, corresponding to 𝐅𝐋𝐓_mtx[ i ], 𝐈𝐍_mtx[ i ], and 𝐎𝐔𝐓_mtx[ i ], submatrices ( i∈[ 0,1,...,7 ] ). Furthermore, 𝐅𝐋𝐓_mtx[ i ] is divided into 8 equal parts labeled as 𝐅𝐋𝐓_mtx[ i,0 ] ∼𝐅𝐋𝐓_mtx[ i,7 ]. Firstly, CPE[ 0 ] broadcasts 𝐈𝐍_mtx[ 0 ] to the other CPEs in the same row, which receive the row-broadcast data by register communication. The 8 CPEs in the same row perform 𝐎𝐔𝐓_mtx[ i ] +=𝐅𝐋𝐓_mtx[ i,0 ] ×𝐈𝐍_mtx[ 0 ] separately. Similarly, We can finish the remaining operations from CPE[ 1 ] to CPE[ 7 ]. For TB(8,8), the CPE Cluster processes one MM_unit at a time. Intuitively, we partition 𝐎𝐔𝐓_mtx by an 8×8 mesh. Each CPE is responsible for one OC8×B8 submatrix of 𝐎𝐔𝐓_mtx, which requires one IC×OC8 submatrix of 𝐅𝐋𝐓_mtx and one IC×B8 submatrix of 𝐈𝐍_mtx. Both 𝐅𝐋𝐓_mtx and 𝐈𝐍_mtx are loaded 8 times repeatedly. We propose a double-broadcast on-chip data sharing to eliminate repeated memory access. Further, we partition 𝐅𝐋𝐓_mtx and 𝐈𝐍_mtx by an 8×8 mesh, then perform the row broadcast of submatrices of 𝐅𝐋𝐓_mtx and the column broadcast of submatrices of 𝐈𝐍_mtx. The specific computational process is as follows: 𝐎𝐔𝐓_mtx[ i,j ] =∑_k=0^7𝐅𝐋𝐓_mtx[ k,i ] ×𝐈𝐍_mtx[ k,j ] Similarly, CPE[ i,j ] represents the CPE in the i-th row and j-th column ( i,j∈[ 0,1,...,7 ] ) shown in <Ref>, corresponding to 𝐅𝐋𝐓_mtx[ j,i ], 𝐈𝐍_mtx[ i,j ], and 𝐎𝐔𝐓_mtx[ i,j ]. 𝐅𝐋𝐓_mtx[ j,i ] is mapped to CPE[ i,j ] by inverting the row and column indexes. The purpose is to avoid idling the row receiving buffer on the CPE and promote the efficiency of register communication. Firstly, CPE[ i,0 ] broadcasts 𝐅𝐋𝐓_mtx[ 0,i ] to the other CPEs in the same row, and CPE[ 0,j ] broadcasts 𝐈𝐍_mtx[ 0,j ] to the other CPEs in the same column. 
After this first pair of broadcasts, all the CPEs perform 𝐎𝐔𝐓_mtx[ i,j ] +=𝐅𝐋𝐓_mtx[ 0,i ] ×𝐈𝐍_mtx[ 0,j ]. Similarly, we can finish the remaining operations from CPE[ i,1 ], CPE[ 1,j ] to CPE[ i,7 ], CPE[ 7,j ]. §.§ Thread-level Optimization Thread-level optimization concentrates on designing the optimization methods of data access within a single CPE. Referring to <cit.>, MG3MConv performs all DMA operations on single-precision data while performing the assembly kernel on double-precision data. Therefore, the additional occupation of LDM caused by data type conversion becomes a non-negligible problem. §.§.§ Enhanced data reuse within the CPE The filter will be used repeatedly during the convolution execution because of the convolutional weight sharing in CNNs. In <Ref>, each 𝐅𝐋𝐓_mtx with a size of IC× OC is used about outH× outW times. The stride size is generally smaller than the filter size, so the input will also be used repeatedly. Similarly, <Ref> will use each 𝐈𝐍_mtx with a size of IC× B about fltH/stdH×fltW/stdW times. As shown in <Ref>, each 𝐅𝐋𝐓_mtx is loaded 36 times, while each 𝐈𝐍_mtx is only loaded 9 times. Because outH× outW is usually larger than fltH/stdH×fltW/stdW in real-world CNNs, we focus more on optimizing the data access of 𝐅𝐋𝐓_mtx, which is used more frequently. By exploring the data reuse of 𝐅𝐋𝐓_mtx, we present <Ref> to reduce or even eliminate the repeated data access cost for 𝐅𝐋𝐓_mtx. In <Ref>, we allocate the LDM space 𝐥𝐝𝐦𝐎𝐔𝐓_S[ outLen ] for outLen 𝐎𝐔𝐓_mtx matrices. Similarly, 𝐥𝐝𝐦𝐈𝐍_S and 𝐥𝐝𝐦𝐅𝐋𝐓_S are for one 𝐈𝐍_mtx and one 𝐅𝐋𝐓_mtx. Given the on-chip data type conversion, we deploy 𝐥𝐝𝐦𝐎𝐔𝐓_D[ outLen ], 𝐥𝐝𝐦𝐈𝐍_D, and 𝐥𝐝𝐦𝐅𝐋𝐓_D for the double-precision data used by the assembly kernel. The computation of 𝐥𝐝𝐦𝐎𝐔𝐓_D[ outLen ] in the innermost loop realizes the outLen-times data reuse of 𝐥𝐝𝐦𝐅𝐋𝐓_S. Correspondingly, the total amount of data transferred for 𝐅𝐋𝐓_mtx from the main memory to the LDM is reduced by a factor of outLen. <Ref> shows that the frequency of loading the same 𝐅𝐋𝐓_mtx drops to 12 with outLen=3. Without considering the LDM capacity, we could set outLen to its extreme value of outH× outW, at which point MG3MConv would eliminate all repeated data access of 𝐅𝐋𝐓_mtx. §.§.§ Enhanced asynchronous DMA within the CPE SW26010 supports asynchronous data access between the main memory and the LDM by DMA, making it possible to hide the cost of data access behind the assembly kernel. Therefore, we employ a double buffering method, shown in <Ref>, to hide DMA's data access cost in MG3MConv. We double buffer 𝐅𝐋𝐓_mtx based on 𝐥𝐝𝐦𝐅𝐋𝐓_S[ ldst ] and 𝐥𝐝𝐦𝐅𝐋𝐓_S[ cmpt ], where ldst indicates the LDM space for the data required by the next computation of the assembly kernel, and cmpt is for the current computation of the assembly kernel. Similarly, we use 𝐥𝐝𝐦𝐈𝐍_S[ ldst ] and 𝐥𝐝𝐦𝐈𝐍_S[ cmpt ] to double buffer 𝐈𝐍_mtx. Because of the data type conversion in MG3MConv, we set the corresponding 𝐥𝐝𝐦𝐅𝐋𝐓_D[ ldst ], 𝐥𝐝𝐦𝐅𝐋𝐓_D[ cmpt ], 𝐥𝐝𝐦𝐈𝐍_D[ ldst ], and 𝐥𝐝𝐦𝐈𝐍_D[ cmpt ] for the double-precision data of the assembly kernel in <Ref>, respectively. With the above preparations, we prefetch 𝐥𝐝𝐦𝐅𝐋𝐓_S[ cmpt ] and 𝐥𝐝𝐦𝐈𝐍_S[ cmpt ], and guarantee that loading 𝐥𝐝𝐦𝐅𝐋𝐓_S[ ldst ] and 𝐥𝐝𝐦𝐈𝐍_S[ ldst ] and computing the assembly kernel are executed in parallel without data dependence. The essence of double buffering is to overlap independent computation and data access in the program, thereby hiding the shorter of the two costs.
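The control flow of this double buffering can be sketched as follows. dma_get_async(), dma_wait(), to_double(), asm_kernel(), flt_tile(), and in_tile() are generic placeholders introduced here for illustration (they are not the actual SW26010 intrinsics), so the fragment shows the structure rather than compilable code.

/* Ping-pong double buffering of FLT_mtx and IN_mtx on one CPE (structural
 * sketch): while iteration t is converted to FP64 and computed, the tiles of
 * iteration t+1 are fetched asynchronously into the other buffer. */
int ldst = 0, cmpt = 1;
dma_get_async(ldmFLT_S[cmpt], flt_tile(0));      /* prefetch the tiles for iteration 0      */
dma_get_async(ldmIN_S[cmpt],  in_tile(0));
dma_wait();
for (int t = 0; t < num_tiles; ++t) {
    if (t + 1 < num_tiles) {                     /* start loading the tiles of iteration t+1 */
        dma_get_async(ldmFLT_S[ldst], flt_tile(t + 1));
        dma_get_async(ldmIN_S[ldst],  in_tile(t + 1));
    }
    to_double(ldmFLT_D[cmpt], ldmFLT_S[cmpt]);   /* FP32 -> FP64 conversion for iteration t */
    to_double(ldmIN_D[cmpt],  ldmIN_S[cmpt]);
    asm_kernel(ldmOUT_D, ldmFLT_D[cmpt], ldmIN_D[cmpt]);
    dma_wait();                                  /* make sure the tiles of t+1 have arrived */
    int tmp = ldst; ldst = cmpt; cmpt = tmp;     /* swap the ping-pong roles                */
}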
If both costs are significantly unbalanced, immoderate double buffering will waste limited on-chip storage resources and hurt performance. To improve the effect of double buffering and save the LDM, we design four double buffering methods based on <Ref>: (1) zero-matrix double buffering; (2) one-matrix double buffering of 𝐈𝐍_mtx or 𝐅𝐋𝐓_mtx; (3) two-matrices double buffering of 𝐈𝐍_mtx and 𝐅𝐋𝐓_mtx; (4) three-matrices double buffering of 𝐈𝐍_mtx, 𝐅𝐋𝐓_mtx, and 𝐎𝐔𝐓_mtx. §.§.§ Enhanced LDM usage with the CPE In enhanced asynchronous DMA within the CPE, there are two types of LDM data: ordinary data and double-buffering data. Because of the data type conversion in MG3MConv, DMA operations depend on SPD (single-precision data), and the assembly kernel depends on DPD (double-precision data). The left of <Ref> shows the simple LDM usage, where each SPD matches a double-sized DPD in pairs. Compared with the ideal LDM usage in the right of <Ref>, we can see that the simple usage will cause additional LDM consumption in 2 times, which is unacceptable for limited LDM with only 64KB on one CPE. As shown in the middle of <Ref>, we propose a nested usage for ordinary data and a fixed usage for double-buffering data to solve the above problem. The nested usage places the LDM space of SPD in the first half of the corresponding DPD, which realizes the physical share and logical separation of LDM between the SPD and the DPD. At the time, we need to guarantee the result accuracy of the algorithm. For the SPD loaded by DMA, follow the end of the DMA loading closely and convert the SPD into the DPD in reverse order. For the SPD stored by DMA, follow the beginning of the DMA storing closely and convert the DPD into the SPD in sequential order. The fixed usage specifies the SPD as the LDM space indexed by ldst and the corresponding DPD as the LDM space indexed by cmpt. To guarantee the result accuracy of the algorithm, for the SPD loaded by DMA, the conversion from the SPD to the DPD follows the end of the DMA loading closely. At this moment, the freed SPD space prepares for the next DMA loading. Similarly, the conversion from the DPD to the SPD follows the beginning of the DMA storing closely. Then, the freed DPD space prepares for the next computation of the assembly kernel. As shown in the middle of <Ref>, the enhanced usage only requires about 66.7% LDM extra compared with the ideal usage, which significantly relieves the pressure of limited LDM. §.§ Instruction-level Optimization Instruction-level optimization mainly addresses the highly optimized implementation of the assembly kernel in MG3MConv. Although it is similar to the work in <cit.>, there still exist two differences: (1) FLT_mtx requires data transposition; (2) the values of B, IC, and OC are small. §.§.§ Register computation without data transposition We must firstly solve two problems with the high-performance implementation of the assembly kernel. How to effectively organize and map scalar computation to vector computation? How to allocate limited vector registers? Ordinary multiply-add operations, such as 𝐂[ 0:N ] +=𝐀[ 0:N ] ×𝐁[ 0:N ], can be directly converted into vector operations by segmentation based on vector length. However, the assembly kernel of MG3MConv is complicated, as shown in <Ref>. 𝐥𝐝𝐦𝐎𝐔𝐓_D[ k,n ] +=∑_c=0^C-1𝐥𝐝𝐦𝐅𝐋𝐓_D[ c,k ] ×𝐥𝐝𝐦𝐈𝐍_D[ c,n ] k∈[ 0,K ) ,n∈[ 0,N ) The direct vectorization method is not suitable for the assembly kernel. 
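In scalar form, the computation the kernel must implement is the following plain C reference (row-major layouts assumed). It also makes the data layout visible: for a fixed k, 𝐥𝐝𝐦𝐅𝐋𝐓_D is read one element at a time along c, while 𝐥𝐝𝐦𝐈𝐍_D and 𝐥𝐝𝐦𝐎𝐔𝐓_D are contiguous along n, which is consistent with the load-and-expand and vector-load choices described below.

/* Scalar reference of the assembly kernel in <Ref>:
 * ldmOUT_D[k][n] += sum_c ldmFLT_D[c][k] * ldmIN_D[c][n],
 * with ldmFLT_D stored C x K, ldmIN_D stored C x N, ldmOUT_D stored K x N. */
void kernel_scalar(int C, int K, int N,
                   const double *ldmFLT_D, const double *ldmIN_D, double *ldmOUT_D)
{
    for (int k = 0; k < K; ++k)
        for (int n = 0; n < N; ++n) {
            double acc = ldmOUT_D[(long)k * N + n];
            for (int c = 0; c < C; ++c)
                acc += ldmFLT_D[(long)c * K + k] * ldmIN_D[(long)c * N + n];
            ldmOUT_D[(long)k * N + n] = acc;
        }
}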
Alternatively, we could change the dimension of 𝐥𝐝𝐦𝐅𝐋𝐓_D from C× K to K× C by data transposition and then directly use the work in <cit.>. However, we prefer to avoid the cost of data transposition, so we design a vectorization mapping in <Ref>. Taking MG3MConv based on TB(1,1) as an example, the details are as follows: * The vldd instruction loads four elements of 𝐥𝐝𝐦𝐎𝐔𝐓_D in turn because the vector length of SW26010 is four. We mark the result as the vector array 𝐥𝐝𝐦𝐎𝐔𝐓_D^V with a size of K×N/4. * The ldde instruction loads one element of 𝐥𝐝𝐦𝐅𝐋𝐓_D in turn and performs vector expansion. We mark the result as the vector array 𝐥𝐝𝐦𝐅𝐋𝐓_D^V with a size of C× K. * The vldd instruction loads four elements of 𝐥𝐝𝐦𝐈𝐍_D in turn. We mark the result as the vector array 𝐥𝐝𝐦𝐈𝐍_D^V with a size of C×N/4. * The vmad instruction performs C multiply-add operations based on 𝐥𝐝𝐦𝐎𝐔𝐓_D^V, 𝐥𝐝𝐦𝐅𝐋𝐓_D^V, and 𝐥𝐝𝐦𝐈𝐍_D^V. * The vstd instruction stores the final values back to the original positions of 𝐥𝐝𝐦𝐎𝐔𝐓_D. Each CPE of SW26010 has 32 vector registers, including the zero register and the SP (stack pointer) register. We can therefore use no more than 30 vector registers freely. As shown in <Ref>, we assume that one stage of the assembly kernel is responsible for one 𝐥𝐝𝐦𝐎𝐔𝐓_D^V block of size K_r×N_r/4, which requires one 𝐥𝐝𝐦𝐅𝐋𝐓_D^V block of size C_r× K_r and one 𝐥𝐝𝐦𝐈𝐍_D^V block of size C_r×N_r/4. To guarantee the efficiency of the vectorized computation, we impose the following constraints: (1) there is no data dependence within each stage; (2) there is no register reuse within each stage. Therefore, C_r=1 satisfies constraint (1). In addition, we have K_r+N_r/4+K_r· N_r/4<30 because of constraint (2). To maximize the computation-to-data-access ratio in <Ref>, we obtain the minimum value of 4/N_r+1/K_r when K_r=N_r/4=4. 2KNC/( 4KCN/N_r+CNK/K_r+2KN ) ≈2/( 4/N_r+1/K_r ) §.§.§ Fine-grained instruction reordering Each CPE of SW26010 has two pipelines, P0 and P1. P0 mainly supports floating-point operations, and P1 mainly supports data transfers. Meanwhile, both of them can run integer scalar operations. According to the conclusions from Section 4.4.1, we can acquire the ideal allocation of vector registers. We load 𝐥𝐝𝐦𝐅𝐋𝐓_D^V with four vector registers marked as 𝐅𝐋𝐓_r[ 0 ] ∼𝐅𝐋𝐓_r[ 3 ], load 𝐥𝐝𝐦𝐈𝐍_D^V with four vector registers marked as 𝐈𝐍_r[ 0 ] ∼𝐈𝐍_r[ 3 ], and store the computational results of 𝐥𝐝𝐦𝐎𝐔𝐓_D^V with 16 vector registers marked as 𝐎𝐔𝐓_r[ 0,0 ] ∼𝐎𝐔𝐓_r[ 3,3 ]. Taking MG3MConv based on TB(1,1) as an example, the left of <Ref> shows the elementary instruction sequence of the innermost loop of the assembly kernel. The parallelism of this instruction sequence is so low that the execution cost is up to 25 cycles. Many excellent studies <cit.> have proved the importance and effectiveness of manual instruction reordering. Therefore, we realize highly effective instruction-level parallelism by manually reordering the instruction sequence. Before entering the innermost loop, we prefetch 𝐅𝐋𝐓_r[ 0 ] ∼𝐅𝐋𝐓_r[ 3 ] and 𝐈𝐍_r[ 0 ] ∼𝐈𝐍_r[ 3 ] required by the first computation, and then rearrange the instruction sequence with two fundamental principles. Principle 1 guarantees that the front of the computation in the current loop iteration is overlapped with the rear of its data access. Principle 2 guarantees that the rear of the computation in the current iteration is overlapped with the front of the data access in the next iteration. As shown in the right of <Ref>, the optimized instruction sequence only requires 17 cycles, and the performance is improved by about 47.1%.
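The choice of K_r and N_r can be checked quickly against the register budget and the ratio above. The following small C program (ours, restricted to K_r at most 4 and N_r at most 16 in multiples of the vector length) confirms that (K_r, N_r) = (4, 16) gives the best compute-to-access ratio within the budget.

#include <stdio.h>

/* Enumerates register-blocking candidates under the budget
 * K_r + N_r/4 + K_r*N_r/4 < 30 and ranks them by the approximate
 * compute-to-access ratio 2/(4/N_r + 1/K_r) derived above. */
int main(void)
{
    double best = 0.0; int bk = 0, bn = 0;
    for (int kr = 1; kr <= 4; ++kr)
        for (int nr = 4; nr <= 16; nr += 4) {
            int regs = kr + nr / 4 + kr * (nr / 4);   /* FLT + IN + OUT vector registers */
            if (regs >= 30) continue;                 /* at most 30 freely usable        */
            double ratio = 2.0 / (4.0 / nr + 1.0 / kr);
            printf("K_r=%d N_r=%2d  regs=%2d  ratio=%.2f\n", kr, nr, regs, ratio);
            if (ratio > best) { best = ratio; bk = kr; bn = nr; }
        }
    printf("best: K_r=%d, N_r=%d (ratio %.2f)\n", bk, bn, best);  /* prints 4, 16 */
    return 0;
}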
Although the rearranged instruction sequence significantly improves performance, the limit of K_r=4 and N_r=16 can not be ignored. K_r=4 requires K to be a multiple of 4, and N_r=16 requires N to be a multiple of 16. If the above conditions are not satisfied, we have to pad data to run the assembly kernel correctly, which will enormously impair the performance. For example, the correct execution will cause 16.4% of extra computation and 12.5% of extra data access when K=30, N=44, and C=16. We take multiple possible cases into account based on the values of K_r and N_r to solve the problem. Given 16 cases consisting of K_r∈{ 1,2,3,4 } and N_r∈{ 4,8,12,16 }, we rearrange 16 kinds of instruction sequences. <Ref> shows the case of K_r=2 and N_r=12. Based on the above implementations, we can divide the assembly kernel into four parts: (1) k∈[ 0,K-mod( K,4 ) ) and n∈[ 0,N-mod( N,16 ) ); (2) k∈[ 0,K-mod( K,4 ) ) and n∈[ N-mod( N,16 ) ,N ); (3) k∈[ K-mod( K,4 ) ,K ) and n∈[ 0,N-mod( N,16 ) ); (4) k∈[ K-mod( K,4 ) ,K ) and n∈[ N-mod( N,16 ) ,N ). At this time, the assembly kernel can be performed without any extra cost when K=30, N=44, and C=16. For MG3MConv based on TB(1,8), we can implement its instruction-level optimization by mainly replacing vldd of 𝐥𝐝𝐦𝐈𝐍_D with vldr. Moreover, MG3MConv based on TB(8,8) mainly uses vldc and ldder instead of vldd of 𝐥𝐝𝐦𝐈𝐍_D and ldde of 𝐥𝐝𝐦𝐅𝐋𝐓_D, respectively. Because TB(1,8) and TB(8,8) will introduce additional instructions to compute the addresses of data broadcasted, we only refer to the rearranged instruction sequences of TB(1,1) and then design the rest of the 32 cases. § EXPERIMENTAL RESULTS To verify the work of this paper synthetically, we evaluate the superiority of the proposed MG3MConv algorithm from three aspects. We first evaluate the algorithm’s adaptability with different convolution scenes. Then, test the performance of several representative CNNs to verify the practicability of MG3MConv. Lastly, we demonstrate the superiority of the multi-grained mapping scheme of MG3MConv. This paper chooses the NVIDIA K80m GPU in the same period to compare the runtime performance of cuDNN. The theoretical peak performance of FP of K80m GPU is 8.74TFlops. Considering different theoretical peak performances of SW26010 and K80m GPU, we use hardware efficiency (%) in experiments instead of the general performance metric (GFlops). The computation of hardware efficiency is runtime performance/theoretical peak performance, which indicates the utilization degree of processors during the convolution execution. We can intuitively spot the superiority of convolution algorithms of different hardware platforms according to hardware efficiency. §.§ Evaluating the Adaptability Current CNNs have a variety of convolution layers, and convolution parameters change with convolution layers irregularly. Therefore, it is unnecessary to try to cover all possible convolution scenes. This paper generates four sets of experiments targeting different values of convolution parameters: (1) channel number (IC,OC), (2) batch number (B), (3) filter size (fltH,fltW), (4) padding size (padH,padW) and stride size (stdH,stdW). §.§.§ Convolution scenes with different channel numbers We generate three sets of convolutions corresponding to three channel scales: small-scale, medium-scale, and big-scale channels. 
For small-scale channel convolutions, the ranges of channel number are 16, 32, 48, and 64; for medium-scale channel convolutions, the ranges of channel number are 64, 128, 192, and 256; for big-scale channel convolutions, the ranges of channel number are 256, 512, 768, and 1024. Each set contains 16 convolution scenes based on IC and OC. <Ref> shows the hardware efficiency of MG3MConv and cuDNN for convolution scenes with various channel numbers. We can find that MG3MConv outperforms cuDNN in 97.8% of the scenes, and the average hardware efficiency is 1.77 times that of cuDNN. Comparing <Ref>, <Ref>, and <Ref>, we can draw an important conclusion that the larger the channel number is, the better the performance is. This mainly owes to that MM_unit based on B, IC, and OC is the core of MG3MConv. When B is determined, larger IC and OC can efficiently improve the performance of matrix multiplications. The scenes of the big-scale channel have the best performance, where the hardware efficiency of MG3MConv can reach 84.78% in max, while that of cuDNN is only 48.36%. However, the performance of convolution declines as the channel number decreases. The average hardware efficiency of MG3MConv is 55.86% on the scenes of the medium-scale channel, which is 1.49 times that of cuDNN. For the scenes of the small-scale channel, the hardware efficiency of MG3MConv drops to 36.87% on average but is still significantly more than that of cuDNN. §.§.§ Convolution scenes with different batch numbers B is not as unpredictable as IC and OC in CNNs, so we set B= 64, 128, and 256 as representatives. Then, the three values of B are matched with IC=OC in Section 5.1.1 to test convolution scenes with different batch numbers. <Ref> shows the hardware efficiency of MG3MConv for convolution scenes with the three representative batch numbers. We have two important observations from the results in <Ref>. Firstly, the larger B is, the higher the performance of MG3MConv is. The average hardware efficiency is 40.39%, 59.07%, and 62.54% by the value of B from smallest to largest, respectively. This owns that larger B means higher DMA bandwidth and better instruction-level parallelism. Secondly, the performance gap of different B is narrowing as the channel number increases. This is because larger IC can improve the instruction-level parallelism, while larger OC can improve the DMA bandwidth. Therefore, B will influence the performance of MG3MConv less as IC and OC increase. We can find that increasing B is beneficial but is not endless. §.§.§ Convolution scenes with different filter sizes Generally, the filter size is odd and no more than 11, so we select fltH=fltW= 3, 5, 7, 9, and 11. Similarly, the five values of fltH=fltW are matched with IC=OC in Section 5.1.1 to finish experiments. <Ref> shows the hardware efficiency of MG3MConv for convolution scenes with different filter sizes. We acquire two important observations from these results. Firstly, the filter size has an inconspicuous effect on the performance of MG3MConv when the other convolution parameters are determined. The average performance fluctuation is only 1.65%. Secondly, the performance will slightly increase for small channel numbers as the filter size becomes bigger. However, the performance maintains highly stable when the channel size is more than 256. This is because the impact of data access gradually decreases for MG3MConv with the increase of the channel number. Larger filter sizes mean better optimization of data locality. 
When MG3MConv is compute-bound, the performance is determined by B, IC, and OC while is not affected by filter sizes. §.§.§ Convolution scenes with different padding sizes and stride sizes Padding and stride are the most neglected parameters in convolution, but they are important to the convergence of CNNs. Except for two kinds of common configurations: (1) padH=padW=0 and stdH=stdW=1 and (2) padH=padW=1 and stdH=stdW=1, we add two additional configurations: (3) padH=padW=0 and stdH=stdW=2 and (4) padH=padW=1 and stdH=stdW=2. Eventually, we match the four configurations with IC=OC in Section 5.1.1 to experimentalize. <Ref> shows the hardware efficiency of MG3MConv for convolution scenes with various padding sizes and stride sizes. With these results of <Ref> together, we can find that the performance of MG3MConv is almost stable when only padding and stride sizes change, and the performance fluctuation is only 0.65% on average. There are two main reasons to cause performance fluctuation: (1) a padding size more than 0 will lead to a more complex execution process of MG3MConv; (2) a bigger stride size will lessen the potential data locality of the algorithm. Even so, we can still consider that MG3MConv has excellent adaptability to padding and stride sizes. §.§ Evaluating the Practicability To verify the practicability of MG3MConv in the real world, we select six representative CNNs as experiment objects: AlexNet, VGG, GoogLeNet, ResNet, SqueezeNet, and YOLO. We test and record the hardware efficiency of MG3MConv based on all convolution layers of the six CNNs, and then compare that of cuDNN. <Ref> shows the hardware efficiency of MG3MConv and cuDNN for different CNNs. Overall, MG3MConv outperforms cuDNN in all the six CNNs. Compared with cuDNN, the improvement of the hardware efficiency of MG3MConv ranges from 2.4% to 77.9%, and is up to 43.85% on average. As shown in <Ref>, the hardware efficiency of MG3MConv on VGG is the highest with 67.04%, and has 37.21% and 96.61% improvement compared with that of cuDNN and swDNN <cit.>. In summary, <Ref> demonstrates that MG3MConv has better practicability than cuDNN and swDNN. §.§ Evaluating the Multi-grained Mapping Scheme The core ideology of the MG3MConv algorithm proposed in this paper is the multi-grained mapping scheme. The scheme is directly affected by B, IC, and OC. Referring to Section 5.1, we select various convolution scenes to verify the superiority of the multi-grained mapping scheme. These convolution scenes are artificially built from the three representative batch numbers and different channel numbers ranging from 16 to 1024. <Ref> shows the best-grained mapping scheme of MG3MConv for different convolution scenarios. The X-axis indicates the value of IC, the Y-axis indicates the value of OC, and the yellow, green, green, and purple squares represent TB(1,1), TB(1,8), and TB(8,8), respectively. We have two important observations from the results in <Ref>. Firstly, when B is fixed, the granularity of the mapping scheme increases as IC and OC increase. Secondly, TB(1,8) and TB(8,8) tend to extend to the upper left corner of <Ref> as B increases. This mainly owes to that MG3MConv takes MM_unit as convolution tasks of one TB. A large-grained mapping scheme will cause a lack of workload in a single CPE for small B, IC, and OC. Conversely, for big B, IC, and OC, a small-grained mapping scheme will lead to repeated data access between the main memory and the LDM. 
Therefore, partitioning one CG into multiple thread blocks is beneficial when B, IC, and OC are small. We manually produce a simple convolution algorithm based on TB(8,8) to verify the superiority of MG3MConv. As shown in <Ref>, for the coverage area of TB(1,1) plus TB(1,8), B=64, B=128, and B=256 are 100%, 68%, and 60%, respectively. We can see that MG3MConv improves performance on most convolution scenes compared with the simple convolution. <Ref> shows the average hardware efficiency of the simple convolution and MG3MConv. Comparing the results of both, MG3MConv brings significant performance improvement with 102.24%, 44.92%, and 26.97% at B=64, B=128, and B=256, respectively. In summary, <Ref> and <Ref> demonstrate that the multi-grained mapping scheme of MG3MConv is necessary and can improve the performance of convolution on SW26010 significantly. § CONCLUSIONS The current support of convolution on SW26010 is still rudimentary. There are mainly two urgent problems: (1) enhance the adaptability for various convolution scenes; (2) deploy the mature implementation of single-precision convolution. This paper presents a novel convolution algorithm, MG3MConv, to solve these problems. Based on the concept of TB proposed in this paper, MG3MConv can perform diversified mapping schemes of convolution tasks, which significantly improves the adaptability to different convolution scenes. Compared with cuDNN and swDNN, experiments demonstrate that the proposed MG3MConv performs better for various convolution scenes and real-world CNNs. Because of the features of the SW26010 architecture, we design architecture-specific optimization techniques, such as LDM utilization and register communication. Generally speaking, some optimization techniques, such as thread blocking, vectorization, and instruction reordering, are also applied to other many-core processors, such as the Intel Xeon/Xeon Phi and the NVIDIA GPUs. In summary, our work can be general for other application and algorithm optimization problems on SW26010, which also provides other many-core processors with some valuable references. Our future work is on other convolution algorithms, such as Winograd-based convolution. Moreover, We expect to extend the experience of convolution algorithms on SW26010 to other many-core processor platforms. § ACKNOWLEDGMENTS The work is supported by the National Key Research and Development Program of China under Grant (2018YFB0204102). We sincerely thank the technical staff professionals of Sunway TaihuLight for helpful discussions.
http://arxiv.org/abs/2307.04795v1
20230710180006
Multi-fractional instantons in $SU(N)$ Yang-Mills theory on the twisted $\mathbb T^4$
[ "Mohamed M. Anber", "Erich Poppitz" ]
hep-th
[ "hep-th", "hep-lat", "hep-ph" ]
=1 A
http://arxiv.org/abs/2307.04604v1
20230710144332
EchoVest: Real-Time Sound Classification and Depth Perception Expressed through Transcutaneous Electrical Nerve Stimulation
[ "Jesse Choe", "Siddhant Sood", "Ryan Park" ]
cs.SD
[ "cs.SD", "cs.LG", "eess.AS", "eess.SP" ]
An effective density matrix approach for intersubband plasmons coupled to a cavity field: electrical extraction/injection of intersubband polaritons R. Colombelli August 12, 2023 ==================================================================================================================================================== Over 1.5 billion people worldwide live with hearing impairment [17]. Despite various technologies that have been created for individuals with such disabilities, most of these technologies are either extremely expensive or inaccessible for everyday use in low-medium income countries. In order to combat this issue, we have developed a new assistive device, EchoVest, for blind/deaf people to intuitively become more aware of their environment. EchoVest transmits vibrations to the user’s body by utilizing transcutaneous electric nerve stimulation (TENS) based on the source of the sounds. EchoVest also provides various features, including sound localization, sound classification, noise reduction, and depth perception. We aimed to outperform CNN-based machine-learning models, the most commonly used machine learning model for classification tasks, in accuracy and computational costs. To do so, we developed and employed a novel audio pipeline that adapts the Audio Spectrogram Transformer (AST) model, an attention-based model, for our sound classification purposes, and Fast Fourier Transforms for noise reduction. The application of Otsu’s Method helped us find the optimal thresholds for background noise sound filtering and gave us much greater accuracy. In order to calculate direction and depth accurately, we applied Complex Time Difference of Arrival algorithms and SOTA localization. Our last improvement was to use blind source separation to make our algorithms applicable to multiple microphone inputs. The final algorithm achieved state-of-the-art results on numerous checkpoints, including a 95.7% accuracy on the ESC-50 dataset for environmental sound classification. § INTRODUCTION According to the World Health Organization, if a person's hearing thresholds are below 20 dB, they are said to have hearing loss [17]. The consequences of this condition can vary in their severity and may include difficulties in communication, leading to social isolation for older individuals, decreased academic performance in children, and limited job opportunities for adults in areas without adequate accommodations for those with hearing loss. Currently, 1.5 billion people globally, or one in every five people, live with hearing loss and this is projected to increase to 2.5 billion people, or 25%, of the world population by 2050. The majority of individuals with hearing loss, 80%, live in low and middle-income countries. Despite this, a significant amount of hearing loss goes unaddressed, costing governments around the world nearly $980 billion annually, with the majority of these costs incurred in low and middle-income countries. This high cost is attributed to the expensive nature of hearing impairment devices such as hearing aids and cochlear implants, which range in cost from $2,000 to $7,000 for hearing aids [16] and $30,000 to $50,000 for cochlear implants [10]. We aim to address the problem of unaddressed hearing impairment by creating EchoVest, a cost-effective wearable alternative to current hearing impairment solutions. EchoVest utilizes sound localization and depth perception through the use of TENS pads and sound classification through Audio Spectrogram Transformers (ASTs). 
To implement sound localization and depth perception, EchoVest selectively activates TENS pads [14] with amplified signals based on the distance and location of the sound source. With a total cost of manufacturing of $98.90, EchoVest is a significantly more affordable and effective option than existing hearing impairment technologies. § MATERIALS The primary objective was to create an inexpensive, durable, and wearable device for the user’s day to day activities. We used a mesh vest as the base with wiring interwoven throughout the mesh. A mesh vest is ideal for contact between skin and the output nodes. On the back of the mesh, we employed a Raspberry Pi 3B+ as our central computer in order to control all of the output devices. The Machine-Learning libraries and other built-in software that we integrated into our algorithms all required a 64-bit system and the Raspberry Pi 3B+ was the cheapest processor on the market that we were able to obtain. In order to record sounds, we used a ReSpeaker 4-mic Array due to the fact that we could get 4 different streams of audio input. The continuous stream of 4 mics made it possible for us to calculate the direction and distance of sounds. The last piece of significant hardware that we used were TENS electrodes. TENS is a service that delivers mild electrical currents through electrodes placed on the body. By applying TENS to our vest, we are able to directly stimulate the user’s nerves and leave them with a multi-dimensional feeling. A full materials list breakdown with all other assorted materials can be seen in the Figure 1 below. Our essential pieces of hardware and their application can be seen in Figure 2. § METHODS We designed EchoVest to determine the relative location and distance of a sound source from each microphone in our vest for audio spatial awareness. We were able to triangulate the relative location of the sound source in real time and determine the sound's arrival angle using the Open embeddeD Auditory System's (ODAS) built-in sound localization algorithms [4]. We utilized Time Difference of Arrival (TDoA) with Generalized Cross-Correlation with Phase Transforms (GCC-PHAT) to calculate the distance from the sound source to the microphones in real time. After recording the audio signals coming from each microphone, we calculated the cross-correlation function by sliding one signal in relation to the other for each time step to see how similar their waveforms are to one another. The time delay between the signals, or TDoA value, between the two microphones is represented by the time step with the highest cross-correlation value [8]. We were able to select the appropriate pad based on the angle of sound arrival and alter the strength of our electrical pad signals in response to the distance from the microphone, calculated using the distance-rate-time equation. In order to enhance EchoVest's sound localization and depth perception, we utilized a Blind Source Separation (BSS) approach that combined Principal Component Analysis (PCA), Non-Negative Matrix Factorization (NMF), and Independent Component Analysis (ICA) to separate the combined sound file from the microphone array into individual sound files for each microphone. We first reduced the dimensionality of the sound input with PCA and NMF. NMF factorized the sound input into two smaller matrices, as seen in Figure 3, with non-negative elements [15], while PCA transformed the sound data into a new coordinate system with axes along the directions of maximum variance [7]. 
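For reference, the core of the cross-correlation search described above can be sketched in plain C as follows. It is a simplified reference only: the frequency-domain PHAT weighting of GCC-PHAT is omitted, and the function name and sign conventions are our assumptions.

#include <math.h>
#include <stddef.h>

/* Time-domain TDoA estimate between two microphone channels: slide y against
 * x and keep the lag with the largest cross-correlation.  fs is the sample
 * rate; the return value is the delay of y relative to x in seconds. */
double tdoa_xcorr(const float *x, const float *y, size_t n, int max_lag, double fs)
{
    double best = -INFINITY;
    int best_lag = 0;
    for (int lag = -max_lag; lag <= max_lag; ++lag) {
        double acc = 0.0;
        for (size_t i = 0; i < n; ++i) {
            long j = (long)i + lag;                 /* y shifted by `lag` samples */
            if (j < 0 || (size_t)j >= n) continue;
            acc += (double)x[i] * (double)y[j];
        }
        if (acc > best) { best = acc; best_lag = lag; }
    }
    return best_lag / fs;   /* multiply by the speed of sound for a path-length difference */
}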
ICA then separated the mixed sound signal into subcomponents by assuming that only one component was Gaussian and that the components were independent from each other [4]. The cross-correlation matrix and TDOA values from sound localization allowed us to identify each sound with its corresponding microphone, resulting in enhanced sound localization and depth perception. We employed a signal processing approach that combined Fast Fourier Transforms (FFTs) with Otsu's method to effectively remove background noise from our sound input and enhance the sound classification accuracy. First, we converted the sound input into its frequency domain using FFT, an efficient algorithm that computes the discrete Fourier transform in real-time. We then implemented Otsu's Method, a commonly used image processing algorithm, to denoise the sound input by selecting an optimal noise threshold. Otsu's Method classifies pixels into background and foreground based on their intensity levels and was applied to the audio by converting it into a frequency histogram from the Fourier transform [2]. This effectively filtered out white noise from the input signals and improved the sound classification accuracy. Figures comparing the sound before and after applying Fast Fourier Transform (FFT) with Otsu's Method are presented below in Figure 4. We implemented a Sound Spectrogram Transformer that was trained on the ESC-50 dataset, a dataset for Environmental Sound Classification which consists of 2000 environmental audio recordings suitable for benchmarking methods of environmental sound classification, using 5-fold cross-validation to prevent overfitting. Additionally, this dataset was used in order to limit the amount of semantical classes to only classify sounds frequently associated with real-time environments. As shown in Figure 5, this transformer takes the audio waveform input of T seconds, outputted from Otsu’s Method, and is converted into a 128x100t spectrogram using log Mel filterbank features computed with a 25ms Hamming window every 10ms. The model outputs a Transformer encoder's [CLS] token, which serves as the audio spectrogram representation for classification. The corresponding label is matched by using a linear layer with sigmoid activation. To assess the accuracy of the classification data produced by the transformer model in a real-time environment, the resultant semantic class was paired with a timestamp associated with the live sound data. This live sound data was simulated by playing a variety of sounds around the mic array with white noise that included people talking, laughter, and the air conditioner running. The aligned time series data were then used to calculate errors and determine accuracy of the real-time classification system. The sample capture rate for the audio input was 0.205 seconds. The only constraint for the electrical system was that we needed to provide an equal amount of current and voltage to each of the output nodes located on the vest. The intensity of the electrical current expelled from the TENS electrodes is directly related to the current from the Raspberry Pi, which outputs a maximum current of 50 hertz through the 5V pin-outs. By applying two strategic parallel circuits, as demonstrated in Figure 6, we were able to ensure that each output node only revised a maximum current of 12.5 HZ. 12.5 HZ is under-powered for the typical TENS electrode (50 Hertz), the current is still high enough to be felt and also guarantees user safety. 
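The threshold selection at the heart of this denoising step is standard Otsu's method applied to a histogram of FFT magnitudes; a minimal C sketch is given below. Building the 256-bin histogram and computing the FFT itself are assumed to happen elsewhere.

/* Otsu's threshold over a 256-bin histogram of FFT magnitudes: pick the cut
 * that maximizes the between-class variance of the "noise" and "signal"
 * classes.  Bins at or below the returned index are treated as background. */
int otsu_threshold(const unsigned long hist[256])
{
    unsigned long total = 0;
    double sum = 0.0;
    for (int i = 0; i < 256; ++i) { total += hist[i]; sum += (double)i * hist[i]; }

    double sum_b = 0.0, best_var = -1.0;
    unsigned long w_b = 0;
    int best_t = 0;
    for (int t = 0; t < 256; ++t) {
        w_b += hist[t];                       /* weight of the noise class  */
        if (w_b == 0) continue;
        unsigned long w_f = total - w_b;      /* weight of the signal class */
        if (w_f == 0) break;
        sum_b += (double)t * hist[t];
        double m_b = sum_b / w_b;             /* class means                */
        double m_f = (sum - sum_b) / w_f;
        double var = (double)w_b * (double)w_f * (m_b - m_f) * (m_b - m_f);
        if (var > best_var) { best_var = var; best_t = t; }
    }
    return best_t;
}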
The process for setting up the OS and driver was challenging due to many libraries being outdated and the Re-Speaker 4 Mic Array Drivers being meant for a 32-bit version of Raspberry Pi. We downloaded specific versions of packages and downgraded our Raspberry Pi OS to 64-bit Raspberry Pi OS Bullseye 11.0 Debian Release. Lastly, the Re-Speaker 4-Mic Array Drivers was written for a 32-bit system with Linux Kernel Version 4.9.80+ while the earliest release of Raspberry Pi 64-bit OS had 5.10.40+ Kernels. We addressed this issue by modifying the ReSpeaker Driver Scripts and creating overlay diversions for every component of the driver. § RESULTS Referencing Figure 7, Prototype 1 had a simple design which served as a basic setup on a breadboard, comprising the ReSpeaker, LEDs, and the Raspberry Pi. This prototype allowed us to test our ML algorithms and hardware, which enabled accurate node activations and the assessment of our model's accuracy in the presence of live background noise. The next stage of prototyping involved integrating our breadboard design with the mesh vest, to test its practical functionality and identify any issues with our original mesh design. Prototype 3 was focused on making the vest portable by incorporating a battery pack, streamlining the wiring, and correcting node positioning. Additionally, the position of the ReSpeaker was adjusted to eliminate significant vest feedback that was observed in the previous prototype. As a result, the device is now easy to use, simply by putting on the vest and switching on the battery pack. We tested the device with LEDS by having a test subject put on the vest and had a person playing sounds on a phone from 2 meters away. By visually being able to see the changes in luminosity of the LEDS, we could confirm that our depth and direction algorithms were working. As shown in Table 1, the implementation of Fast Fourier Transforms with Otsu's Method outperformed three other noise reduction techniques with a peak signal-to-noise ratio (PSNR) of 57.5 dB. FFT with Otsu's Method is an uncommon technique for noise reduction, but it proved to be more effective than each of the other algorithms due to its higher PSNR value, which indicates the algorithm's ability to reduce noise. Referencing Table 2, our Audio Spectrogram Transformer model outperformed many traditional sound classification models in terms of accuracy, with higher accuracies on the ESC-50 dataset and higher mean average precisions (mAPs) on the AudioSet dataset. Specifically, the Audio Spectrogram Transformer achieved an accuracy of 95.7% on the ESC-50 dataset and a 0.485 mAP on the AudioSet. Also, we conducted a small test of the electrical stimulation of the TENS electrodes on the vest. We had 10 different human volunteers (55-65 years old) report the amount of stimulation that they received on a scale from 0-10 (with 0 being no electrical stimulation and 10 being the stimulation with the maximum current output) when all the TENS were activated. This process was then repeated in 3 different environments. § DISCUSSION Our implementation of the Fast Fourier Transform (FFT) algorithm using Otsu's Method for noise thresholding effectively removed white noise from our audio input, which preserved the model’s sound classification accuracy of 95.7%. This confirmed that our preprocessing pipeline was efficient in accurate sound classification in real-time environments. 
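The PSNR scores used for the Table 1 comparison follow the standard definition; a minimal sketch of the computation (our reconstruction, with peak denoting the maximum possible sample value, e.g. 1.0 for normalized float audio) is:

#include <math.h>
#include <stddef.h>

/* PSNR in dB between a clean reference signal and a denoised signal:
 * PSNR = 10 * log10(peak^2 / MSE). */
double psnr_db(const float *ref, const float *test, size_t n, double peak)
{
    double mse = 0.0;
    for (size_t i = 0; i < n; ++i) {
        double d = (double)ref[i] - (double)test[i];
        mse += d * d;
    }
    mse /= (double)n;
    if (mse == 0.0) return INFINITY;          /* identical signals */
    return 10.0 * log10(peak * peak / mse);
}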
This implementation was much more effective than other methods, since the peak signal-to-noise ratio (PSNR) was higher than other methods without requiring machine learning, which allowed our algorithm to process under the limited computational powers. We tested the different noise reduction algorithms in Table 1 by calculating the PSNR (using the PSNR formula) of the original and denoised sounds for each of the four different noise reduction algorithms we considered. As shown in Table 3, the TENS stimulation is largely unaffected by changes in temperature and is largely unaffected by environmental factors. Due to a lack of professional expertise and knowledge of wiring and Raspberry Pi current control, we could not guarantee the safety of the user due to the possible rapid changes in current. Thus, we replaced the TENS with LEDs, which were the closest alternative to the TENS as they could describe the distance and depth through visual aid with the brightness of the LED indicating distance and the specific placement of the LED representing direction. As a result, EchoVest serves as a proof-of-concept that the various components and pipeline accomplish the task. Currently, we are creating an app that hosts the classification model and sound preprocessing on the cloud because of potential for customizable features. However, EchoVest will still be completely localized and will not require the app to function. The app serves as a means to increase functionality with wifi and bluetooth capabilities by giving the user the ability to not be notified of certain sounds and change the various strengths of the TENS to the person's suitability. Furthermore, our future goal includes a partnership with companies like Ring that would allow us to utilize and implement our sound classification and preprocessing pipeline to their system. Their systems are outdated and do not take out sufficient white noise and, therefore, limit the functionality of their doorbell system. In addition to company partnerships, EchoVest can be directly applied to search and rescue operations as it gives personnel a heightened sense of their surroundings. EchoVest has multiple functionalities, which makes EchoVest a multi-purpose solution to a variety of real world problems. § CONCLUSION In this paper, we developed an accessible, cost-efficient wearable product for localizing and classifying sounds. We further demonstrate that our preprocessing pipeline, consisting of Otsu’s Method and FFT, sufficiently preserves sound classification accuracy in a real-time environment. EchoVest utilizes optimized machine learning to efficiently and effectively lower computation costs, thereby reducing product costs. EchoVest costs a maximum of $98 per unit to manufacture and is easily accessible to the general public due to the lack of customization needed. Traditional hearing aid devices require numerous prerequisites, such as hearing test, medical clearance, and hearing aid evaluation. Constrastingly, EchoVest will be available to the public without any specialization or pre-requisites because its only variable would be the size of the vest. 17 bisgaard2021 Bisgaard, N., Zimmer, S., Laureyns, M., & Groth, J. (2021). A model for estimating hearing aid coverage world-wide using historical data on hearing aid sales. International Journal of Audiology, 61(10), 841–849. <https://doi.org/10.1080/14992027.2021.1962551> chen2012 Chen, H., & Gururajan, R. (2012). 
Otsu’s Threshold Selection Method Applied in De-noising Heart Sound of the Digital Stethoscope Record. Lecture Notes in Electrical Engineering, 239–244. <https://doi.org/10.1007/978-3-642-26001-8_31> ast Gong, Y., Chung, Y.-A., & Glass, J. (2021). AST: Audio Spectrogram Transformer. ArXiv:2104.01778 <https://arxiv.org/abs/2104.01778> sloc Grondin, F., & Michaud, F. (2019). Lightweight and optimized sound source localization and tracking methods for open and closed microphone array configurations. Robotics and Autonomous Systems, 113, 63–80. <https://doi.org/10.1016/j.robot.2019.01.002> fdahear Health, C. for D. and R. (2019, April 24). Hearing Aids. FDA. <https://www.fda.gov/medical-devices/consumer-products/hearing-aids> ica Hyvärinen, A., & Oja, E. (2000). Independent Component Analysis: Algorithms and Applications. Neural Networks, 13(45), 411–430. <https://www.cs.helsinki.fi/u/ahyvarin/papers/NN00new.pdf> pca Jolliffe, I. T., & Cadima, J. (2016). Principal component analysis: a review and recent developments. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2065), 20150202. <https://doi.org/10.1098/rsta.2015.0202> gccphat Kwon, B., Park, Y., & Park, Y. (2010, October 1). Analysis of the GCC-PHAT technique for multiple sources. IEEE Xplore. <https://doi.org/10.1109/ICCAS.2010.5670137> hearlos McKee, M. M., Choi, H., Wilson, S., DeJonckheere, M. J., Zazove, P., & Levy, H. (2018). Determinants of Hearing Aid Use Among Older Americans With Hearing Loss. The Gerontologist. <https://doi.org/10.1093/geront/gny051> cochlear National Institute on Deafness and other Communication Disorders. (2018, June 15). Cochlear Implants. NIDCD. <https://www.nidcd.nih.gov/health/cochlear-implants> procon Nunez, K. (2020, February 27). Cochlear Implant: Cost, Pros, Cons, Risks, How It Works. Healthline. <https://www.healthline.com/health/cochlear-implant> ppwc Papers with Code - ESC-50 Benchmark (Audio Classification). (n.d.). Paperswithcode.com. Retrieved February 2, 2023, from <https://paperswithcode.com/sota/audio-classification-on-esc-50> speechrec Rev, & Rev. (2021, May 17). Exploring Your Speech-to-text Options: Advantages and Disadvantages of Speech Recognition Software. Rev. <https://www.rev.com/blog/speech-to-text-technology/advantages-and-disadvantages-of-speech-recognition-software> tens Transcutaneous electrical nerve stimulator (TENS). (2018, April 4). University of Iowa Hospitals & Clinics. <https://uihc.org/health-topics/transcutaneous-electrical-nerve-stimulator-tens> nmf Wang, Y.-X., & Zhang, Y.-J. (2013). Nonnegative Matrix Factorization: A Comprehensive Review. IEEE Transactions on Knowledge and Data Engineering, 25(6), 1336–1353. <https://doi.org/10.1109/tkde.2012.51> cost Watson, A. M. (2022, August 16). How Much Do Hearing Aids Cost? GoodRx. <https://www.goodrx.com/health-topic/ear/hearing-aid-cost> who World Health Organization: WHO. (2019, September 18). Hearing loss. Who.int; World Health Organization: WHO. <https://www.who.int/health-topics/hearing-loss#tab=tab_1‌>
http://arxiv.org/abs/2307.07642v1
20230714221550
Roman Early-Definition Astrophysics Survey Opportunity: Galactic Roman Infrared Plane Survey (GRIPS)
[ "Roberta Paladini", "Catherine Zucker", "Robert Benjamin", "David Nataf", "Dante Minniti", "Gail Zasowski", "Joshua Peek", "Sean Carey", "Lori Allen", "Javier Alonso-Garcia", "Joao Alves", "Friederich Anders", "Evangelie Athanassoula", "Timothy C. Beers", "Jonathan Bird", "Joss Bland-Hwathorn", "Anthony Brown", "Sven Buder", "Luca Casagrande", "Andrew Casey", "Santi Cassisi", "Marcio Catelan", "Ranga-Ram Chary", "Andre-Nicolas Chene", "David Ciardi", "Fernando Comeron", "Roger Cohen", "Thomas Dame", "Ronald Drimmel", "Jose Fernandez Trincado", "Douglas Finkbeiner", "Douglas Geisler", "Mario Gennaro", "Alyssa Goodman", "Gregory Green", "Gergely Hajdu", "Calen Henderson", "Joseph Hora", "Valentin D. Ivanov", "Davy Kirkpatrick", "Chiaki Kobayashi", "Michael Kuhn", "Andres Kunder", "Jessica Lu", "Philip W. Lucas", "Daniel Majaess", "S. Thomas Megeath", "Aaron Meisner", "Sergio Molinari", "Przemek Mroz", "Meliss Ness", "Nadine Neumayer", "Francisco Nogueras-Lara", "Alberto Noriega-Crespo", "Radek Poleski", "Hans-Walter Rix", "Luisa Rebull", "Henrique Reggiani", "Marina Rejkuba", "Roberto K. Saito", "Ralph Schoenrich", "Andrew Saydjari", "Eugenio Schisano", "Edward Schlafly", "Keving Schlaufman", "Leigh Smith", "Joshua Speagle", "Dan Wisz", "Rosemary Wyse", "Nadia Zakamska" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.IM" ]
Roman Early-Definition Astrophysics Survey Opportunity: Galactic Roman Infrared Plane Survey (GRIPS) White Paper Submitted on October 22 2021 Authors: Roberta Paladini (Caltech-IPAC), Catherine Zucker (STScI), Robert Benjamin (Wisconsin), David Nataf (JHU), Dante Minniti (Univ Andres Bello), Gail Zasowski (Univ of Utah), Joshua Peek (STScI), Sean Carey (Caltech-IPAC), Lori Allen (NOIRLab), Javier Alonso-Garcia (Univ Antofagasta), Joao Alves (Univ of Vienna), Friederich Anders (UNiv of Barcelona), Evangelie Athanassoula (LAM), Timothy C. Beers (Univ of Notre Dame), Jonathan Bird (Vanderbilt Univ), Joss Bland-Hwathorn (Univ of Sydney), Anthony Brown (Univ of Leiden), Sven Buder (ANU), Luca Casagrande (ANU), Andrew Casey (Monash Univ), Santi Cassisi (INAF), Marcio Catelan (PUC), Ranga-Ram Chary (Caltech-IPAC), Andre-Nicolas Chene (Gemini Obs), David Ciardi (Caltech-IPAC), Fernando Comeron (ESO), Roger Cohen (STScI), Thomas Dame (SAO), Ronald Drimmel (INAF), Jose Fernandez Trincado (UCN), Douglas Finkbeiner (Harvard Univ), Douglas Geisler (Univ de Concepcion), Mario Gennaro (STScI), Alyssa Goodman (Harvard Univ), Gregory Green (MPIA), Gergely Hajdu (CAMK), Calen Henderson (Caltech-IPAC), Joseph Hora (CfA), Valentin D. Ivanov (ESO), Davy Kirkpatrick (Caltech-IPAC), Chiaki Kobayashi (UNiv of Hertfordshire), Michael Kuhn (Univ of Hertfordshire), Andres Kunder (Saint Martin's Univ), Jessica Lu (UC Berkeley), Philip W. Lucas (Univ of Hertfordshire), Daniel Majaess (MSVU), S. Thomas Megeath (Univ of Toledo), Aaron Meisner (NOIRLab), Sergio Molinari (INAF), Przemek Mroz (Warsaw Univ), Meliss Ness (Columbia Univ), Nadine Neumayer (MPIA), Francisco Nogueras-Lara (MPIA), Alberto Noriega-Crespo (STScI), Radek Poleski (Warsaw Univ), Hans-Walter Rix (MPIA), Luisa Rebull (Caltech-IPAC), Henrique Reggiani (Carnegie Obs), Marina Rejkuba (ESO), Roberto K. Saito (UFSC), Ralph Schoenrich (UNiv College London), Andrew Saydjari (Harvard Univ), Eugenio Schisano (INAF), Edward Schlafly (STScI), Keving Schlaufman (JHU), Leigh Smith (Cambridge Univ), Joshua Speagle (Univ Toronto), Dan Wisz (UC Berkeley), Rosemary Wyse (JHU), Nadia Zakamska (JHU) Do you support the selection of a Roman Early-Definition Astrophysics Survey? Yes. A wide-field near-infrared survey of the Galactic disk and bulge/bar(s) is supported by a large representation of the community of Galactic astronomers. The combination of sensitivity, angular resolution and large field of view make Roman uniquely able to study the crowded and highly extincted lines of sight in the Galactic plane. A ∼ 1000 deg^2 survey of the bulge and inner Galactic disk would yield an impressive dataset of ∼120 billion sources and map the structure of our Galaxy. The effort would foster subsequent expansions in numerous dimensions (spatial, depth, wavelengths, epochs).Importantly, the survey would benefit from early defintion by the community, namely because the Galactic disk is a complex environment, and different science goals will require trade offs. Science Investigation: The Milky Way is the only large galaxy where individual stars can be resolved down to the central few parsecs. Existing large-scale photometric, spectroscopic, and astrometric surveys have fostered a rich understanding of the Galaxy as a spatially, chemically, and kinematically complex structure, with ample evidence of interactions, past mergers, and secular evolution and substructure in star formation. 
Recent abundance of large-scale ground-based spectroscopic surveys, and measurements of parallaxes and proper motions by the Gaia mission have super-charged these investigations. A wide area Galactic survey with Roman will characterize most of the stellar content of our Galaxy and will provide unique information on both the history of galaxy formation, and the on-going process of star formation in vastly different environments, as Roman is uniquely suited to deal with the confusion and extinction prevalent in the plan of the Galaxy (see Fig. 1). A Galactic Plane survey was one of five programs specifically endorsed by the Science Definition Team (SDT) in the WFIRST Interim Report (Green et al. 2012). Importanty, the Nancy Grace Roman Space Telescope significantly exceeds the capability of the existing efforts in three critical areas: (a) astrometric precision, (b) survey sensitivity, (c) maping speed. This opens new avenues in studies of stellar astrophysics, star formation and Galactic structure, e.g. Appendix D of the WFIRST-AFTA report (2015) and Stauffer et al. (2018). The high angular resolution of Roman will enable studies of previously unresolved stellar populations (see Fig. 2). That includes globular clusters in the Galactic plane and bulge, stellar clusters in star forming regions, and the entire nuclear region of the Galaxy. The Nobel-prize winning study of stellar motions near Sgr A, the HST Galactic center study (200 mas, 0.1 deg^2, Dong et al. 2011) and the ESO VLT GALACTICNUCLEUS survey (220 mas, 0.3 deg^2, Nogueras-Lara et al. 2019) are all pertinent examples highlightining the relevance of such a dataset. The sensitivity of Roman will provide the deepest infrared Galactic plane survey by at least two magnitudes (see Fig. 3 and Fig. 4). Red clumps and YSOs can be surveyed out to a greater volume of the disk allowing the rewriting of Galactic structure, particularly the spiral arms and the central Galaxy where source confusion has blocked progress. The greater depth will likewise enable studies of the stellar initial mass function down to lower mass limits in sites across the Galaxy, and provide significantly more “background" sources for the construction of 3D dust maps. The combination of depth and angular resolution wil also yiel a novel/unique catalog of galaxies and galaxy clusters beyond the Galactic disk. The mapping speed of Roman will allow for a significant graction of the stars in the Galaxy to be covered in a uniform way, a crucial requiremnt for studies of Galactic structure. Finally, a Roman single-pass survey of the Galactic Plane early in the mission would enable subsequent passes later on, largely surpassing what can be obtained by simply combining Roman and, e.g., 2MASS, with a 25-year baseline, therefore bolstering the characterization of stellar proper motions in regions inaccessible to Gaia, notably in the complex orbital structure of the Galactic bar(s) and nucleus. This will produce new insights on the “inside-out" evolution and central luminous/dark matter distribution of the Galaxy, and enable proper motion selection of populations, e.g., HST SWEEPS survey. GRIPS (Galactic Roman Infrared Plane Survey) will also allow for synergies with shorter wavelength monitoring of the Galactic Plane by the Vera C. Rubin Observatory, with the spectral information obtained by SPHEREx and SDSS-V Milky Way Mapper, and with the proposed (Hobbs et al. 2016) Gaia-NIT mission. 
Possible Observational Outline and Preparatory Activities: To maximize the impact of Roman's high angular resolution in the Galaxy's most crowded fields, we propose a 991 deg^2 survey of the inner Galactic Plane, spanning latitudes |b| < 3^2 over the longitude range |l| < 60^2 with additional latitude coverage up to |b| < 10^2 in the bulge (|l| < 10 ^2). We will leverage the Wide Field Instrument in three filters: F106, F158 and F213. The F106 filter was chosen to provide continuous wavelength coverage with Rubin at shorter wavelengths, and the F213 filter was selected to maximize the potential of Roman in dust-enshrouded regions deep in the plane. F158 will complement the other two filters and allow building diagnostics for the identification of the surveyed stellar populations. Importantly, follow-up time in subsequent years will allow additional astrometric and proper motion measurements over a sizable temporal baeline. We propose an integration time of 55 seconds per filter, reaching a minimum depth of 25.5 mag in F106, 25.3 mag in F158, and 24.7 mag in F213. We plan for one primary dither in each filter to fill the gaps in the detectors and account for cosmic rays - totally 21.4 sec- and two secondary sub-pixel dithers only in the F213 band to obtain accurate astrometry for determining the proper motions, requiring 10 sec each for slewing and settling. This yields two exposures each at F106 and F158, and six exposures in F213. We propose small FOV-type slews (of 0.4^∘), which will add 50 sec of overhead for each field. With a 3 sec readout time between exposures when not slewing, this setup will require 673 sec of time per 0.281 deg^2 field, and includes exposure time, slewing time, readout time, and time for our primary (gap-fill) and secondary (sub-pixel) dithering strategies. We will need approximately 3600 pointings for our 991 deg^2 survey area, yielding and estimated total time of 673 hours. By extrapolating to our proposed footprint the Penny et al. (2019) stellar density estimates based on the Besancon model, we estimate that up to 120 billion unique stellar sources shall be characterized, as compared to the 0.38 billion sources in Gaia eDR3. Optimization of Survey Design via Community Inputs: We anticipate that the ultimate design of GRIPS wil require substantial community input to optimize the undertaking, as highlighted by the breadth of expertise in the list of co-authors. The above survey design is intended to serve as a baseline, with trade-offs in the dithering strategy, coverage area, and number/choice of filters having substantial implications for the different science cases described in Section 4. Defining the survey early will allow us to build not only the most powerful survey to address these different science cases on its own, but also a well-crafted initial design to enable optimal expansions in epochs, spatial coverage, wavelength coverage by subsequent guest investigators. A poorly-considered initial design will foreclose some of these options. Development of A Crowded-Fields Forced Photometry Pipeline: Creating an expandable high precision astro-/photometric catalog – with ∼ 100 times more sources than Gaia – in regions of spatially complex diffuse background will require significant preparatory work. Combining with next generation ground-based surveys like Rubin and existing longer wavelength surveys will allow us to significantly expand the native Roman photometric coverage, broadening the scientific reach of the survey. 
The development of PSF forced photometry pipelines able to operate in heavily crowded fields is required given the resolution of ground-based observations. This activity, which we will carry out during the early definition period using the combined HST/DECam dataset towards the Galactic center, will complement effort such as the Joint Survey Processing (Chary et al. 2020), which is focused on lower source-density regions. alpha Chary et al., 2020, Joint Survey Processing of Euclid, Roman and Rubin: Final Report, astro-ph/2008.10663 Dong et al., 2011, MNRAS, 417, 114 Green et al. 2012, Wide-Field Infrared Survey Telescope (WFIRST): Final Report, astro-ph/1208.4012 Hobbs et al., 2016, Gaia NIR: Combining optical and Near-Infrared (NIR) capabilities with Time-Delay Integration (TDI) sensors for a future Gaia-like mission, astro-ph/1609.07325 Minniti et al., 2010, NewAstr, 15, 433 Nogueras-Lara et al., 2019, A&A, 631, 20 Penny et al., 2019, ApJS, 241, 3 Spergel et al., 2015, Wide-Field Infrared Survey Telescope-Astrophysics Focused Telescope Assets, WFIRST-AFTA 2015 Report Stauffer et al., 2018, The Science Advantage of a Redder Filter for WFIRST, astro-ph/1806.00554 Figures
http://arxiv.org/abs/2307.05062v1
20230711071039
System of Spheres-based Two Level Credibility-limited Revisions
[ "Marco Garapa", "Eduardo Ferme", "Maurício D. L. Reis" ]
cs.LO
[ "cs.LO", "cs.AI", "I.2.3 Deduction and Theorem Proving (F.4.1)" ]
Two level credibility-limited revision is a non-prioritized revision operation. When revising by a two level credibility-limited revision, two levels of credibility and one level of incredibility are considered. When revising by a sentence at the highest level of credibility, the operator behaves as a standard revision; if the sentence is at the second level of credibility, then the outcome of the revision process coincides with a standard contraction by the negation of that sentence. If the sentence is not credible, then the original belief set remains unchanged. In this paper, we propose a construction for two level credibility-limited revision operators based on Grove's systems of spheres and present an axiomatic characterization for these operators.
§ INTRODUCTION
Belief Change (also called Belief Revision) is an area that studies the dynamics of belief. One of the main goals underlying this area is to model how a rational agent updates her set of beliefs when confronted with new information. The main model of belief change is the AGM model <cit.>. In that model, each belief of an agent is represented by a sentence and the belief state of an agent is represented by a logically closed set of (belief-representing) sentences. These sets are called belief sets. A change consists in adding or removing a specific sentence from a belief set to obtain a new belief set. The AGM model considers three kinds of belief change operators, namely expansion, contraction and revision. An expansion occurs when new information is added to the set of beliefs of an agent. The expansion of a belief set K by a sentence α (denoted by K+α) is the logical closure of K∪{α}. A contraction occurs when information is removed from the set of beliefs of an agent. A revision occurs when new information is added to the set of beliefs of an agent while retaining consistency if the new information is itself consistent. Of the three operations, expansion is the only one that can be univocally defined. The other two operations are characterized by a set of postulates that determine the behaviour of each of these functions, establishing conditions or constraints that they must satisfy. Although the AGM model has acquired the status of standard model of belief change, several researchers (for an overview see <cit.>) have pointed out its inadequacy in several contexts and proposed several extensions and generalizations of that framework. One of the criticisms of the AGM model that appears in the belief change literature is the total acceptance of the new information, which is characterized by the success postulate for revision. “The AGM model always accepts the new information. This feature appears, in general, to be unrealistic, since rational agents, when confronted with information that contradicts previous beliefs, often reject it altogether or accept only parts of it” (<cit.>). This may happen for various reasons. For example, the new information may lack credibility or it may contradict previous highly entrenched beliefs.
Belief change operators that do not satisfy the success postulate are designated non-prioritized belief change operators (<cit.>). The output of a non-prioritized revision may not contain the new belief that has motivated that revision. Two level credibility-limited revision operators (two level CL revision operators for short) are non-prioritized revision operators that were proposed (independently) in <cit.> and <cit.>. When revising by means of a two level CL revision operator, two levels of credibility and one level of incredibility are considered. When revising by a sentence at the highest level of credibility, the operator behaves as a standard revision. In this case the new information is incorporated in the agent's belief set. If the sentence is at the second level of credibility, then the outcome of the revision process coincides with a standard contraction by the negation of that sentence. In this case, the new information is not accepted but all the beliefs that are inconsistent with it are removed. The intuition underlying this behaviour is that the belief is not credible enough to be incorporated in the agent's belief set, but it creates some doubt in the agent's mind, making her remove all the beliefs that are inconsistent with it. In this paper, we propose a construction for two level CL revision operators based on Grove's systems of spheres and present an axiomatic characterization for these operators. The rest of the paper is organized as follows: In Section 2 we introduce the notation and recall the main background concepts and results that will be needed throughout this article. In Section 3 we present the two level CL revision operators and an axiomatic characterization for a class of these operators. In Section 4 we propose a construction for two level CL revision operators based on Grove's systems of spheres and present an axiomatic characterization for these operators. In Section 5, we present a brief survey of related works. In Section 6, we summarize the main contributions of the paper.
§ BACKGROUND
§.§ Formal Preliminaries
We will assume a propositional language that contains the usual truth functional connectives: ¬ (negation), ∧ (conjunction), ∨ (disjunction), → (implication) and ↔ (equivalence). We will also use ℒ to denote the set of all formulas of the language. We shall make use of a consequence operation Cn that takes sets of sentences to sets of sentences and which satisfies the standard Tarskian properties, namely inclusion, monotony and iteration. Furthermore, we will assume that Cn satisfies supraclassicality, compactness and deduction. We will sometimes use Cn(α) for Cn({α}), A ⊢α for α∈ Cn(A), ⊢α for α∈ Cn(∅), A ⊬α for α∉Cn(A), and ⊬α for α∉Cn(∅). The letters α, β, … will be used to denote sentences of ℒ. A, B, … shall denote sets of sentences of ℒ. K is reserved to represent a set of sentences that is closed under logical consequence (i.e. K = Cn(K)); such a set is called a belief set or theory. Given a belief set K we will denote Cn(K∪{α}) by K+α. We will use the symbol ⊤ to represent an arbitrary tautology and the symbol ⊥ to represent an arbitrary contradiction. A possible world is a maximal consistent subset of ℒ. The set of all possible worlds will be denoted by 𝒲. Sets of possible worlds are called propositions. The set of possible worlds that contain R⊆ℒ is denoted by ‖R‖, i.e., ‖R‖={M∈𝒲 : R ⊆ M}. If R is inconsistent, then ‖R‖=∅. The elements of ‖R‖ are designated R-worlds.
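A minimal computational sketch can make this notation concrete. It assumes a finite set of atoms (so that possible worlds can be enumerated and Cn is decidable), identifies a world with the set of atoms it makes true, represents a sentence by a predicate on worlds, and represents a belief set extensionally by its set of worlds, so that consequence becomes set inclusion and expansion K+α becomes intersection of world sets. The function names are illustrative only.

from itertools import product

ATOMS = ("p", "q")   # finite signature, an assumption made only for this sketch

# A possible world is identified with the set of atoms it makes true.
WORLDS = [frozenset(a for a, v in zip(ATOMS, bits) if v)
          for bits in product((False, True), repeat=len(ATOMS))]

def models(formula):
    # ||formula||: the set of worlds in which the sentence (a predicate on worlds) holds
    return frozenset(w for w in WORLDS if formula(w))

def entails(belief_worlds, formula):
    # K |- alpha  iff  every K-world is an alpha-world
    return belief_worlds <= models(formula)

def expand(belief_worlds, formula):
    # Expansion K + alpha: extensionally, ||K|| intersected with ||alpha||
    return belief_worlds & models(formula)

p = lambda w: "p" in w
q = lambda w: "q" in w
K = models(p)                                            # K = Cn({p})
print(entails(K, p), entails(K, q))                      # True False
print(entails(expand(K, q), lambda w: p(w) and q(w)))    # True: p∧q follows from K+q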
For any sentence α, ‖α‖ is an abbreviation of ‖Cn({α})‖ and its elements are designated α-worlds.
§.§ AGM Revisions
The operation of revision of a belief set consists of the incorporation of new beliefs in that set. In a revision process, some previous beliefs may be retracted in order to obtain, as output, a consistent belief set. The following postulates, which were originally presented in <cit.>, are commonly known as AGM postulates for revision:[These postulates were previously presented in <cit.> but with slightly different formulations.]
(⋆1) K⋆α = Cn(K⋆α) (i.e. K⋆α is a belief set). (Closure)
(⋆2) α∈K⋆α. (Success)
(⋆3) K⋆α⊆K+α. (Inclusion)
(⋆4) If ¬α∉K, then K+α⊆K⋆α. (Vacuity)
(⋆5) If α is consistent, then K⋆α is consistent. (Consistency)
(⋆6) If ⊢α↔β, then K⋆α = K⋆β. (Extensionality)
(⋆7) K⋆α∩K⋆β⊆K⋆(α∨β). (Disjunctive overlap)
(⋆8) If ¬α∉K⋆(α∨β), then K⋆(α∨β)⊆K⋆α. (Disjunctive inclusion)
An operator ⋆ for a belief set K is a basic AGM revision if and only if it satisfies postulates (⋆1) to (⋆6). It is an AGM revision if and only if it satisfies postulates (⋆1) to (⋆8).
§.§ AGM Contractions
A contraction of a belief set occurs when some beliefs are removed from it (and no new beliefs are added). The following postulates, which were presented in <cit.> (following <cit.>), are commonly known as AGM postulates for contraction:
(÷1) K÷α = Cn(K÷α) (i.e. K÷α is a belief set). (Closure)
(÷2) K÷α⊆K. (Inclusion)
(÷3) If α∉K, then K⊆K÷α. (Vacuity)
(÷4) If ⊬α, then α∉K÷α. (Success)
(÷5) K⊆ (K÷α) + α. (Recovery)
(÷6) If ⊢α↔β, then K÷α = K÷β. (Extensionality)
(÷7) K÷α∩K÷β⊆K÷(α∧β). (Conjunctive overlap)
(÷8) K÷(α∧β)⊆K÷α whenever α∉K÷(α∧β). (Conjunctive inclusion)
An operator ÷ for a belief set K is a basic AGM contraction if and only if it satisfies postulates (÷1) to (÷6). It is an AGM contraction if and only if it satisfies postulates (÷1) to (÷8). There are several contraction operators that are exactly characterized by the postulates (÷1) to (÷8), namely the (transitively relational) partial meet contractions <cit.>, safe contraction <cit.>, system of spheres-based contraction <cit.> and epistemic entrenchment-based contraction <cit.>. The Levi and Harper identities[Harper identity: <cit.> K÷α=(K⋆¬α)∩K. Levi identity: <cit.> K⋆α=(K÷¬α)+α.] make contraction and revision interchangeable. These identities allow us to define the revision and the contraction operators in terms of each other. The Levi (respectively Harper) identity enables the use of contraction (resp. revision) as a primitive function and treats revision (resp. contraction) as defined in terms of contraction (resp. revision).
§.§ Sphere-based Operations of Belief Change
Grove (<cit.>), inspired by the semantics for counterfactuals (<cit.>), proposed a structure called system of spheres to be used for defining revision functions. Figuratively, the distance between a possible world and the innermost sphere reflects its plausibility towards K. The closer a possible world is to ‖K‖, the more plausible it is. Let K be a belief set. A system of spheres, or spheres' system, centred on K is a collection 𝕊 of subsets of 𝒲, i.e., 𝕊⊆𝒫(𝒲), that satisfies the following conditions:
(𝕊 1) 𝕊 is totally ordered with respect to set inclusion; that is, if U, V∈𝕊, then U⊆ V or V⊆ U.
(𝕊 2) ‖K‖∈𝕊, and if U∈𝕊, then ‖K‖⊆ U (‖K‖ is the ⊆-minimum of 𝕊).
(𝕊 3) 𝒲∈𝕊 (𝒲 is the largest element of 𝕊).
(𝕊 4) For every α∈ℒ, if there is any element in 𝕊 intersecting ‖α‖ then there is also a smallest element in 𝕊 intersecting ‖α‖.
The elements of 𝕊 are called spheres. For any consistent sentence α∈ℒ, the smallest sphere in 𝕊 intersecting ‖α‖ is denoted by 𝕊_α.
Given a system of spheres 𝕊 centred on K it is possible to define expansion, revision and contraction operators based on 𝕊. Let K be a belief set.
(a) An operation + on K is a system of spheres-based expansion operator if and only if there exists a system of spheres 𝕊 centred on K such that for all α it holds that: K+α = ⋂(‖K‖∩‖α‖).
(b) An operation ÷ on K is a system of spheres-based contraction operator if and only if there exists a system of spheres 𝕊 centred on K such that for all α it holds that: K÷α = ⋂((𝕊_¬α∩‖¬α‖) ∪ ‖K‖) if ‖¬α‖≠∅, and K÷α = K otherwise.
(c) An operation ⋆ on K is a system of spheres-based revision operator if and only if there exists a system of spheres 𝕊 centred on K such that for all α it holds that: K⋆α = ⋂(𝕊_α∩‖α‖) if ‖α‖≠∅, and K⋆α = Cn(⊥) otherwise.
It holds that sphere-based revision and contraction operators are characterized by the (eight) AGM postulates for revision and contraction, respectively (<cit.>).
§ TWO LEVEL CREDIBILITY-LIMITED REVISIONS
The two level CL revisions are operators of non-prioritized revision. When revising a belief set by a sentence α, we first need to analyse the degree of credibility of that sentence. When revising by a sentence that is considered to be at the highest level of credibility, the operator works as a standard revision operator. If it is considered to be at the second level of credibility, then that sentence is not incorporated in the revision process but its negation is removed from the original belief set. When revising by a non-credible sentence, the operator leaves the original belief set unchanged. The following definition formalizes this concept: Let K be a belief set, ⋆ be a basic AGM revision operator on K and C_H and C_L be subsets of ℒ. Then ⊙ is a two level CL revision operator induced by ⋆, C_H and C_L if and only if: K⊙α = K⋆α if α∈ C_H; K⊙α = (K⋆α)∩K if α∈ C_L; and K⊙α = K if α∉(C_L∪ C_H). In the previous definition C_H∪ C_L represents the sentences that are considered to have some degree of credibility. C_H and C_L represent respectively the set of sentences that are considered to be at the first (highest) and at the second level of credibility. Note that if α∈ C_L, then K⊙α=(K⋆α)∩K. According to the Harper identity, (K⋆α)∩K coincides with the contraction of K by ¬α. This construction can be further specified by adding constraints to the structure of the set(s) of credible sentences. In <cit.>, the following properties for a given set of credible sentences C were proposed:
Credibility of Logical Equivalents: If ⊢α↔β, then α∈ C if and only if β∈ C.[In <cit.> this property was designated by closure under logical equivalence and was formulated as follows: If ⊢α↔β and α∈ C, then β∈ C.]
Single Sentence Closure: If α∈ C, then Cn(α)⊆ C.
Element Consistency: If α∈ C, then α⊬⊥.
Credibility lower bounding: If K is consistent, then K⊆ C.
Additionally, in <cit.> the following condition that relates a set of credible sentences C with a revision function ⋆ was introduced. This condition, designated by condition (<ref>), states that if a sentence α is not credible, then any possible outcome of revising the belief set K through ⋆ by a credible sentence contains ¬α. The intuition underlying this property is that if α is not credible then its negation cannot be removed. Thus its negation should still be in the outcome of the revision by any credible sentence.
(C - ⋆) If α∉C and β∈ C, then ¬α∈K⋆β.
§.§ Two level credibility-limited revision postulates
We now recall from <cit.> some of the postulates proposed to express properties of the two level CL revision operators.
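Before recalling the postulates, the constructions above can be prototyped over a finite set of possible worlds. The sketch below is a hedged illustration rather than the paper's formalism: belief sets and sentences are represented extensionally by their sets of worlds (so the closure ⋂(...) is implicit, and the intersection of two belief sets corresponds to the union of their world sets), a system of spheres is a list of world sets ordered from innermost to outermost, and C_H, C_L are plain Python sets of sentence labels; all names are illustrative.

def smallest_sphere(spheres, alpha_worlds):
    # S_alpha: the smallest sphere intersecting ||alpha||, or None if there is none
    for sphere in spheres:                      # innermost first
        if sphere & alpha_worlds:
            return sphere
    return None

def revise(spheres, alpha_worlds):
    # Grove revision: ||K * alpha|| = S_alpha ∩ ||alpha|| (empty set = inconsistent theory)
    S = smallest_sphere(spheres, alpha_worlds)
    return S & alpha_worlds if S is not None else frozenset()

def contract(spheres, K_worlds, alpha_worlds, all_worlds):
    # Grove contraction: ||K ÷ alpha|| = ||K|| ∪ (S_¬alpha ∩ ||¬alpha||) when ¬alpha is consistent
    not_alpha = all_worlds - alpha_worlds
    S = smallest_sphere(spheres, not_alpha)
    return K_worlds | (S & not_alpha) if S is not None else K_worlds

def two_level_cl_revise(rev, K_worlds, alpha, alpha_worlds, C_H, C_L):
    # Two level CL revision induced by a revision and C_H, C_L:
    # full revision, contraction of the negation (via the Harper identity), or no change.
    if alpha in C_H:
        return rev(alpha_worlds)
    if alpha in C_L:
        return rev(alpha_worlds) | K_worlds     # (K * alpha) ∩ K, taken on world sets
    return K_worlds

# Tiny usage example over two atoms; worlds are written as frozensets of true atoms.
w = lambda *atoms: frozenset(atoms)
ALL = frozenset({w(), w("p"), w("q"), w("p", "q")})
K = frozenset({w("p", "q")})                              # K believes p and q
spheres = [K, K | {w("p")}, K | {w("p"), w("q")}, ALL]    # innermost to outermost
not_p = frozenset(x for x in ALL if "p" not in x)
print(revise(spheres, not_p))                             # K * ¬p believes ¬p and q
print(contract(spheres, K, ALL - not_p, ALL))             # K ÷ p keeps q, gives up p
rev = lambda aw: revise(spheres, aw)
print(two_level_cl_revise(rev, K, "¬p", not_p, C_H=set(), C_L={"¬p"}))  # same as K ÷ p

The last line reproduces the contraction of K by p through the C_L branch, matching the Harper identity remark after the definition.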
The first postulate was originally proposed in <cit.>, the second in <cit.>, the following three in <cit.> and the remaining ones in <cit.>. Consistency Preservation If K is consistent, then K⊙α is consistent. Confirmation If α∈K, then K⊙α =K. Strict Improvement If α∈K⊙α and ⊢α→β, then β∈K⊙β. Regularity If β∈K⊙α, then β∈K⊙β. Disjunctive Distribution If α∨β∈K⊙ (α∨β), then α∈K⊙α or β∈K⊙β. N-Recovery K⊆K⊙α+α. N-Relative success If α∈K⊙α, then K⊙α=K. N-Persistence If β∈K⊙β, then β∈K⊙α. N-Success Propagation If α∈K⊙α and ⊢β→α, then β∈K⊙β. Weak Relative Success α∈K⊙α or K⊙α⊆K. Weak Vacuity If α∉K, then K⊆K⊙α. Weak Disjunctive Inclusion If α∉K⊙(α∨β), then K⊙ (α∨β)+(α∨β)⊆K⊙α+α. Containment If K is consistent, then K∩ ((K⊙α)+α)⊆K⊙α. The following observations relate some of the postulates presented above. [<cit.>] Let K be a consistent and logically closed set and ⊙ be an operator on K. (a) If ⊙ satisfies closure, consistency preservation, weak relative success and N-Recovery, then it satisfies N-Relative success. (b) If ⊙ satisfies weak vacuity and inclusion, then it satisfies confirmation. Let K be a consistent and logically closed set and ⊙ be an operator on K. (a) If ⊙ satisfies consistency preservation, closure, vacuity, inclusion, strict improvement, disjunctive inclusion, disjunctive overlap and N-recovery, then it satisfies regularity. (b) If ⊙ satisfies consistency preservation, closure, vacuity, weak relative success and disjunctive inclusion, then it satisfies disjunctive distribution. (c) If ⊙ satisfies N-recovery and closure, then it satisfies containment. In the following theorem we recall from <cit.> an axiomatic characterization for a two level CL revision operator induced by an AGM revision and sets C_H and C_L satisfying some given properties.[Actually, the containment postulate was also included in the list of postulates of the representation theorem presented in <cit.>, however as Observation <ref> illustrates, containment follows from closure and N-recovery.] [<cit.>] Let K be a consistent and logically closed set and ⊙ be an operator on K. Then the following conditions are equivalent: 1. ⊙ satisfies weak relative success, closure, inclusion, consistency preservation, weak vacuity, extensionality, strict improvement, N-persistence, N-recovery, disjunctive overlap and weak disjunctive inclusion. 2. ⊙ is a two level CL revision operator induced by an AGM revision operator ⋆ for K and sets C_H, C_L⊆ such that: C_L satisfy credibility of logical equivalents and element consistency, C_H∩ C_L=∅, C_H satisfies element consistency, credibility lower bounding and single sentence closure and condition (C_H∪ C_L - ⋆) holds. § SYSTEM OF SPHERES-BASED TWO LEVEL CREDIBILITY-LIMITED REVISIONS In this section we present the definition of a system of spheres-based two level CL revision operator. We start by presenting the notion of two level system of spheres, centred on K. Let K be a belief set. A two level system of spheres centred on K is a pair (𝕊_i,𝕊) whose elements are subsets of , i.e., 𝕊⊆𝒫() and 𝕊_i ⊆𝒫(), such that: (a) 𝕊 and 𝕊_i satisfy conditions (𝕊 1), (𝕊 2) and (𝕊 4) of Definition <ref>; (b) 𝕊_i ⊆𝕊; (c) If X∈𝕊_i, then X⊆ Y for all Y∈𝕊∖𝕊_i. 
Intuitively, a two level system of spheres (𝕊_i,𝕊), centered on K is a system composed by two systems of spheres 𝕊_i and 𝕊, both centered on K, where 𝕊_i ⊆𝕊 and in which the condition (𝕊 3) of Definition <ref> is relaxed for 𝕊_i and 𝕊, allowing the existence of possible worlds outside the union of all spheres of 𝕊_i and of 𝕊.[Condition (𝕊 3) of Definition <ref> was also relaxed in <cit.> when constructing a (modified) system of spheres for credibility-limited revision operators.] Conditions (b) and (c) impose that the spheres of 𝕊_i are the innermost ones (see Figure <ref>). The following observation is a direct consequence of condition (c). It states that all spheres contained in a given sphere of 𝕊_i belong to 𝕊_i. If 𝕊_i and 𝕊 satisfy condition (c) of Definition <ref>, then it holds that: If X∈𝕊 and Y∈𝕊_i are such that X⊆ Y, then X∈𝕊_i. In a system of spheres centered on K, the worlds considered most plausible are those that lie in the innermost sphere (i.e. in K), and the closer a possible world is to the center, the more plausible it is considered to be. Similarly, the worlds lying in the spheres of 𝕊_i have a higher degree of plausibility than those in the spheres of 𝕊∖𝕊_i. Intuitively, a two level system of spheres (𝕊_i,𝕊), centered on K defines three clusters. The first cluster is formed by the worlds in the spheres of 𝕊_i. These worlds are the ones to which a higher degree of plausibility is assigned (relatively to those outside the spheres of 𝕊_i). The second cluster is formed by the worlds in the spheres of 𝕊∖𝕊_i, which are assigned some (lower) degree of plausibility. Finally, the third cluster is formed by the worlds outside the spheres of 𝕊, which are considered to be not plausible. We are now in conditions to present the definition of a system of spheres-based two level CL revision operator. The outcome of the revision by means of a system of spheres-based two level CL revision operator of a belief set K by a sentence α (see Figure <ref>) is: - the intersection of the most plausible α-worlds, if these are α-worlds in the cluster of the most plausible worlds.[Note that being X a set of possible worlds ⋂ X is a belief set.] - the intersection of all the worlds contained in the union of the set of K-worlds with the set of the most plausible α-worlds, if the α-worlds are considered to be plausible, but are not in the cluster of the most plausible ones. - K if the α-worlds are not plausible, i.e, in this case the belief set remains unchanged. Let K be a belief set and (𝕊_i,𝕊) be a two level system of spheres centered on K. The system of spheres-based two level CL revision operator induced by (𝕊_i,𝕊) is the operator ⊙_(𝕊_i,𝕊) such that, for all α: K⊙_(𝕊_i,𝕊)α = {[ ⋂ (S_α∩α) if S_α∈𝕊_i; ⋂ (K∪ (S_α∩α)) if S_α∈𝕊∖𝕊_i; K if X∩α=∅, for all X∈𝕊; ]. An operator ⊙ on K is a system of spheres-based two level CL revision operator if and only if there exists a two levels system of spheres (𝕊_i,𝕊) centred on K such that K⊙α=K⊙_(𝕊_i,𝕊)α holds for all α. §.§ Representation theorems We now present a representation theorem for system of spheres-based two level CL revision operators. It also relates these operators with the two level CL revision operators induced by AGM revision operators and sets C_H, C_L⊆ satisfying some given properties. 
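The operator just defined admits the same kind of extensional prototype. In the sketch below (an illustration under the same finite-worlds conventions as before, with names chosen for readability only), the two level system of spheres is given as the full list of spheres together with the prefix of spheres that forms 𝕊_i; the three cases of the definition are decided by where the smallest sphere meeting the input lands.

def two_level_sphere_revise(inner, spheres, K_worlds, alpha_worlds):
    # System of spheres-based two level CL revision, on world sets:
    #   S_alpha in S_i             -> full revision by alpha
    #   S_alpha in S \ S_i         -> keep K and add the most plausible alpha-worlds
    #   no sphere meets ||alpha||  -> K unchanged
    S = None
    for sphere in spheres:                  # innermost first
        if sphere & alpha_worlds:
            S = sphere
            break
    if S is None:
        return K_worlds
    if S in inner:
        return S & alpha_worlds
    return K_worlds | (S & alpha_worlds)

# Example: K believes p and q; only the two innermost spheres belong to S_i,
# and the world where both atoms are false lies outside every sphere.
w = lambda *atoms: frozenset(atoms)
ALL = frozenset({w(), w("p"), w("q"), w("p", "q")})
K = frozenset({w("p", "q")})
spheres = [K, K | {w("p")}, K | {w("p"), w("q")}]
inner = spheres[:2]
not_q = frozenset(x for x in ALL if "q" not in x)
not_p = frozenset(x for x in ALL if "p" not in x)
print(two_level_sphere_revise(inner, spheres, K, not_q))             # accepts ¬q (highest credibility)
print(two_level_sphere_revise(inner, spheres, K, not_p))             # only gives up p (second level)
print(two_level_sphere_revise(inner, spheres, K, frozenset({w()})))  # not plausible: K unchanged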
Considering the axiomatic characterization for the latter, presented in Observation <ref>, we note that we only need to ensure that the Condition (C_H - ⋆) holds, to guarantee that the class of these operators coincides with the class of system of spheres-based two level CL revision operators. Let K be a consistent and logically closed set and ⊙ be an operator on K. Then the following conditions are equivalent: 1. ⊙ satisfies weak relative success, closure, inclusion, consistency preservation, vacuity, extensionality, strict improvement, N-persistence, N-recovery, disjunctive overlap and disjunctive inclusion. 2. ⊙ is a system of spheres-based two level CL revision operator. 3. ⊙ is a two level CL revision operator induced by an AGM revision operator ⋆ for K and sets C_H, C_L⊆ such that: C_L satisfy credibility of logical equivalents and element consistency, C_H∩ C_L=∅, C_H satisfies element consistency, credibility lower bounding and single sentence closure and conditions (C_H∪ C_L - ⋆) and (C_H - ⋆) hold. § RELATED WORKS In this section we will mention other approaches related with the present paper. - In <cit.>, the two level CL revision operators were defined in terms of a basic AGM revision operator and sets C_H and C_L of credible sentences. Several properties have been proposed for these sets. Postulates to characterize two level CL revision operators were proposed. Results exposing the relation between the postulates and the properties of C_H and C_L were presented. Axiomatic characterizations for several classes of two level CL revision operators were presented (namely for two level CL revision operators induced by basic AGM revisions and by AGM revisions in which the associated sets of credible sentences satisfy certain properties). - In <cit.>, the operators of two CL revision were introduced in terms of basic AGM belief revisions operators (in that paper these operators are designated by Filtered belief revision). The possibility that an item of information could still be “taken” seriously, even if it is not accepted as being fully credible (this type of information is there called allowable) was discussed. A syntactic analysis of filtered belief revision was provided. - In <cit.>, the works presented in <cit.> and <cit.> were extended by introducing the notion of partial belief revision structure, providing a characterization of filtered belief revision in terms of properties of these structures. There it is considered the notion of rationalizability of a choice structure in terms of a plausibility order and established a correspondence between rationalizability and AGM consistency in terms of the eight AGM postulates for revision. An interpretation of credibility, allowability and rejection of information in terms of the degree of implausibility of the information was provided. - In <cit.> credibility-limited revision operators were presented. When revising a belief set by a sentence by means of a credibility-limited revision operator, we need first to analyse whether that sentence is credible or not. When revising by a credible sentence, the operator works as a basic AGM revision operator, otherwise it leaves the original belief set unchanged. Two level credibility-limited revisions operators can be seen as a generalization of credibility-limited revision operators. In fact, in the case that C_L=∅ both types of operators coincide. In <cit.> several properties were prosed for C (the set of credible sentences) and this model was developed in terms of possible world models. 
Representations theorems for different classes of Credibility-limited revisions operators were presented. The extension of credibility-limited revision operators to the belief bases setting was studied in <cit.>. § CONCLUSION The model of credibility-limited revision (<cit.>) is essentially a generalization of the AGM framework (<cit.>) of belief revision, which addresses one of the main shortcomings pointed out to that framework, namely the fact that it assumes that any new information has priority over the original beliefs. In the model of credibility-limited revisions two classes of sentences are considered. Some sentences —the so-called credible sentences— are accepted in the process of revision by them, while the remaining sentences are such that the process of revising by them has no effect at all in the original belief set. In its turn, the model of two level CL revision (<cit.>) generalizes credibility-limited revision by considering an additional class of sentences. A sentence of this class is such that, although a revision by it does not lead to its acceptance, it causes the removal of its negation from the original belief set. The present paper offers a semantic approach to the two level CL revision operators. More precisely, it introduces a class of two-level CL revision operators whose definition is based on a structure called two level system of spheres, which generalizes the well-known systems of spheres proposed by Grove (<cit.>). This semantic definition provides some additional insight on the intuition that underlays the notion of two-level CL revisions. Acknowledgements This paper was partially supported by FCT-Fundação para a Ciência e a Tecnologia, Portugal through project PTDC/CCI-COM/4464/2020. M.G. and M.R. were partially supported by the Centro de Investigação em Matemática e Aplicações (CIMA), through the grant UIDB/04674/2020 of FCT. E.F. was partially supported by FCT through project UIDB/04516/2020 (NOVA LINCS). * eptcs 10 AGM85 Carlos Alchourrón, Peter Gärdenfors, and David Makinson. On the logic of theory change: Partial meet contraction and revision functions. Journal of Symbolic Logic, 50:510–530, 1985. 10.1007/BF00370430 AM85 Carlos Alchourrón and David Makinson. On the logic of theory change: Safe contraction. Studia Logica, 44:405–422, 1985. 10.1007/BF00370430 Bon19 Giacomo Bonanno. Credible information, allowable information and belief revision - extended abstract. In Lawrence S. Moss, editor, Proceedings Seventeenth Conference on Theoretical Aspects of Rationality and Knowledge, TARK 2019, Toulouse, France, 17-19 July 2019, volume 297 of EPTCS, pages 82–90, 2019. 10.4204/eptcs.297.6 Bon22 Giacomo Bonanno. Filtered belief revision: Syntax and semantics. Journal of Logic, Language and Information, 31:645–675, 2022. 10.1007/s10849-022-09374-x FH11 Eduardo Fermé and Sven Ove Hansson. AGM 25 years: Twenty-five years of research in belief change. Journal of Philosophical Logic, 40:295–331, 2011. 10.1007/s10992-011-9171-9 FH18 Eduardo Fermé and Sven Ove Hansson. Belief Change: Introduction and Overview. Springer Briefs in Computer Science Series. Springer, 2018. 10.1007/978-3-319-60535-7 FMT03 Eduardo Fermé, Juan Mikalef, and Jorge Taboada. Credibility-limited functions for belief bases. Journal of Logic and Computation, 13:1:99–110, 2003. 10.1093/logcom/13.1.99 Garapa22b Marco Garapa. Two level credibility-limited revisions. The Review of Symbolic Logic, 15(2):388–408, 2022. 10.1017/S1755020320000283 GFR18a Marco Garapa, Eduardo Fermé, and Maurício Reis. 
Studies in credibility-limited base revision. In Proceedings of the Sixteenth International Conference on Principles of Knowledge Representation and Reasoning (KR 2018), pages 240–247, 2018. <https://aaai.org/papers/28-studies-in-credibility-limited-base-revision/> GFR23 Marco Garapa, Eduardo Fermé, and Maurício D. L. Reis. Levi and Harper identities for non-prioritized belief base change. Artificial Intelligence, 2023. 10.1016/j.artint.2023.103907 GFR20 Marco Garapa, Eduardo Fermé, and Maurício D.L. Reis. Credibility-limited base revision: New classes and their characterizations. Journal of Artificial Intelligence Research, 69:1023 – 1075, 2020. 10.1613/jair.1.12298 Gar78 Peter Gärdenfors. Conditionals and changes of belief. Acta Philosophica Fennica, 30:381–404, 1978. Gar82 Peter Gärdenfors. Rules for rational changes of belief. In Tom Pauli, editor, Philosophical Essays dedicated to Lennart Ȧqvist on his fiftieth birthday, number 34 in Philosophical Studies, pages 88–101, 1982. Gar88 Peter Gärdenfors. Knowledge in Flux: Modeling the Dynamics of Epistemic States. The MIT Press, Cambridge, 1988. GM88 Peter Gärdenfors and David Makinson. Revisions of knowledge systems using epistemic entrenchment. In Moshe Y. Vardi, editor, Proceedings of the Second Conference on Theoretical Aspects of Reasoning About Knowledge, pages 83–95, Los Altos, 1988. Morgan Kaufmann. <http://www.tark.org/proceedings/tark_mar7_88/p83-gardenfors.pdf> Gro88 Adam Grove. Two modellings for theory change. Journal of Philosophical Logic, 17:157–170, 1988. 10.1007/BF00247909 Han99 Sven Ove Hansson. A survey of non-prioritized belief revision. Erkenntnis, 50:413–427, 1999. 10.1023/A:1005534223776 Han99b Sven Ove Hansson. A Textbook of Belief Dynamics. Theory Change and Database Updating. Applied Logic Series. Kluwer Academic Publishers, Dordrecht, 1999. HFCF01 Sven Ove Hansson, Eduardo Fermé, John Cantwell, and Marcelo Falappa. Credibility-limited revision. Journal of Symbolic Logic, 66(4):1581–1596, 2001. 10.2307/2694963 Har76a William L. Harper. Rational conceptual change. PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, 1976:462–494, 1976. 10.1086/psaprocbienmeetp.1976.2.192397 KM91 Hirofumi Katsuno and Alberto Mendelzon. Propositional knowledge base revision and minimal change. Journal of Artificial Intelligence, 52:263–294, 1991. 10.1016/0004-3702(91)90069-V Lev77 Isaac Levi. Subjunctives, dispositions, and chances. Synth̀ese, 34:423–455, 1977. 10.1007/BF00485649 Lew73 David Lewis. Counterfactuals. Blackwell, Oxford, 1973. Mak97b David Makinson. Screened revision. Theoria, 63:14–23, 1997. 10.1111/j.1755-2567.1997.tb00737.x RH14 Hans Rott and Sven Ove Hansson. Safe contraction revisited. In Sven Ove Hansson, editor, David Makinson on Classical Methods for Non-Classical Problems, volume 3 of Outstanding Contributions to Logic, pages 35–70. Springer Netherlands, 2014. 10.1007/978-94-007-7759-0_4 § APPENDIX In this appendix we provide a sketch proof for the main result presented in this paper. (2) to (1): Let ⊙ be a system of spheres-based two level credibility limited revision operator induced by a two levels system of spheres (𝕊_i,𝕊). We need to prove that ⊙ satisfies all the postulates present in statement (1) . (1) to (2): Assume that ⊙ satisfies all the postulates listed in statement (1) and consider the following constructions for 𝕊 and 𝕊_i: S∈𝕊_i iff: (a) S=K; (b) ∅≠S⊆{w:w∈K⊙α, for some α such thatK⊙α⊆α} and K⊙α⊆ S for all α such that S∩α≠∅. 
S∈𝕊 iff: (a) S=K; (b) ∅≠S⊆{w:w∈K⊙α, for some α such thatK⊙α∩α≠∅}, K⊙α⊆ S for all α such that S∩α≠∅ and if S∩α=∅ and S∉𝕊_i, then K⊙α∩ S=K. We need to show that: 1. (𝕊_i,𝕊) is a two level system of spheres centred on K. To do so, it is necessary to prove that: i. 𝕊 and 𝕊_i satisfy conditions (𝕊 1), (𝕊 2) and (𝕊 4), of Definition <ref>; ii. 𝕊_i ⊆𝕊; iii. If X∈𝕊_i, then X⊆ Y for all Y∈𝕊∖𝕊_i. 2. If α=∅, then K⊙α=K; 3. For α such that K⊙α⊬α and S(α)=⋃{K⊙δ:α⊆δ}, it holds that: i. S(α)∈𝕊 ii. S(α)=S_α (i.e. S(α) is the minimal sphere that intersects with α). iii. K⊙α = {[ ⋂ (S_α∩α) if S_α∈𝕊_i; ⋂ (K∪ (S_α∩α)) if S_α∈𝕊∖𝕊_i; K if X∩α=∅, for all X∈𝕊; ]., where S_α=S(α). (1) to (3): Let ⊙ be an operator satisfying the postulates listed in statement (1). Let ⋆ be the operation such that: i. If α∉K⊙α, then K⋆α=K⊙α+α; ii. If α∈K⊙α, then K⋆α=Cn(α). Furthermore let C_H={α: α∈K⊙α} and C_L={α: α∉K⊙α}∖ C_H. These are the same construction that were used in the corresponding part of Observation <ref>. Then, regarding this proof, it remains only to show that condition (C_H - ⋆) holds. (3) to (1): By Observation <ref> it only remains to prove that ⊙ satisfies vacuity and disjunctive inclusion.
http://arxiv.org/abs/2307.07276v1
20230714111454
The center of the asymptotic Hecke category and unipotent character sheaves
[ "Liam Rogel", "Ulrich Thiel" ]
math.RT
[ "math.RT", "math.CT", "math.QA" ]
In 2015, Lusztig [Bull. Inst. Math. Acad. Sin. (N.S.) 10 (2015), no. 1, 1–72] showed that for a connected reductive group over an algebraic closure of a finite field the associated (geometric) Hecke category admits a truncation in a two-sided Kazhdan–Lusztig cell, making it a categorification of the asymptotic algebra (J-ring), and that the categorical center of this “asymptotic Hecke category” is equivalent to the category of unipotent character sheaves supported in the cell. Subsequently, Lusztig noted that an asymptotic Hecke category can be constructed for any finite Coxeter group using Soergel bimodules. Lusztig conjectured that the centers of these categories are modular tensor categories (which was then proven by Elias and Williamson) and that for non-crystallographic finite Coxeter groups the S-matrices coincide with the Fourier matrices that were constructed in the 1990s by Lusztig, Malle, and Broué–Malle. If the conjecture is true, the centers may be considered as categories of “unipotent character sheaves” for non-crystallographic finite Coxeter groups. In this paper, we show that the conjecture is true for dihedral groups and for some (we cannot resolve all) cells of H_3 and H_4. The key ingredient is the method of H-reduction and the identification of the (reduced) asymptotic Hecke category with known categories whose centers are also known. We conclude by studying the asymptotic Hecke category and its center for some infinite Coxeter groups with a finite cell.
§ INTRODUCTION
The representations of finite simple groups are a crucial ingredient in the investigation of finite symmetries. Most finite simple groups arise from finite reductive groups. An example of a reductive group is G = SL_n(𝔽_p) for a prime number p; its finite variants are G(𝔽_q) = SL_n(𝔽_q) for powers q of p, yielding the finite simple groups PSL_n(𝔽_q). An intrinsic construction produces from a reductive group G a finite group W, the Weyl group, which controls much of the structure of G. The Weyl group of SL_n(𝔽_p), for example, is the symmetric group 𝔖_n. Note that W is independent of p. Deligne–Lusztig theory <cit.> identifies an important subset of irreducible complex representations of finite reductive groups: the “unipotent” ones. The work of Lusztig <cit.> revealed an important feature of these representations: there is a finite set U(W) just depending on W which parametrizes the irreducible unipotent representations of G(𝔽_q) independently of q. Moreover, for each ρ∈ U(W) there is a polynomial Deg(ρ) ∈ℚ[q] such that evaluating it at q gives the degree of the corresponding unipotent representation of G(𝔽_q). Hence, the Weyl group controls the representation theory of the groups G(𝔽_q). Weyl groups admit a special presentation as abstract groups. For example, when taking the simple transpositions in 𝔖_n as generators, the defining relations are the Artin braid relations together with the requirement that each generator is an involution. Coxeter groups are abstract groups admitting a more general such presentation. They still share many properties with Weyl groups. There is an especially well-behaved class of Coxeter groups: the crystallographic groups.
Among the finite irreducible Coxeter groups, the crystallographic ones are precisely the Weyl groups; the non-crystallographic ones are the dihedral groups and two further groups denoted H_3 and H_4. The non-crystallographic groups cannot arise from a reductive group like Weyl groups do. Nonetheless, it has been observed that typical tools like the Hecke algebra, which is an algebra of equivariant functions on G(𝔽_q) that again just depends on W, can be defined naturally for any Coxeter group <cit.>. In 1993 (<cit.> with an indication already in <cit.>), Lusztig discovered the same phenomenon for unipotent representations: the sets U(W) and the polynomials Deg(ρ) satisfy some natural properties; these still make sense when W is (finite) non-crystallographic, and there are ad hoc constructions of such data satisfying these properties. Of course, there is no group G(𝔽_q) where this could come from. Later, Lusztig <cit.>, Malle <cit.>, and Broué–Malle <cit.> found ad hoc constructions of Fourier matrices associated to (finite) non-crystallographic Coxeter groups. These are transition matrices between unipotent “almost characters” and unipotent characters for the groups G(𝔽_q) introduced in <cit.>. They are of fundamental importance since the almost characters have a uniform and simple construction. It is puzzling why there exist similar matrices when there is no reductive group. This observation was extended by Broué, Malle, and Michel <cit.> even to complex reflection groups W. In the words of <cit.>, it almost looks like there is a “fake algebraic group” associated to complex reflection group. These mysterious objects were coined “spetses”. Up to date, no one understands what they really are. In 2015, Lusztig <cit.> proposed a construction of categories associated to Coxeter groups which conjecturally generalizes categories of unipotent character sheaves and naturally encodes the ad hoc Fourier matrices. If true, this would be an important step towards understanding spetses for non-crystallographic finite Coxeter groups. We will now summarize the basic line of thought towards the conjecture. §.§ The conjecture and its background Let G be a connected reductive group over 𝔽_p. Fix a prime ℓ≠ p and let D^b_G,c(G) be the G-equivariant constructible bounded derived category of ℓ-adic sheaves on G. Lusztig's character sheaves <cit.> are certain simple perverse sheaves in D^b_G,c(G). As for characters, there is a notion of unipotent character sheaves <cit.>. Taking the characteristic function of the Frobenius F on a unipotent character sheaf gives the corresponding unipotent almost character <cit.> for the finite group G^F.[This only holds up to scalars and requires some restrictions. We can ignore this here.] The transition matrix F_W between suitably normalized unipotent almost characters and unipotent characters is the Fourier matrix from <cit.>. Like the parametrization of unipotent characters, it only depends on the Weyl group W of G. Let H_W be the Hecke algebra of W with parameters as in <cit.>. The multiplicative properties of the Kazhdan–Lusztig basis { b_w }_w ∈ W of H_W lead to a decomposition of W into two-sided cells <cit.>. Let _G be the subcategory of D^b_G,c(G) consisting of direct sums of unipotent character sheaves. To each unipotent character one can associate a unique two-sided cell c of W <cit.> and this leads to a decomposition of U(W) into subsets U^c(W). This categorifies by <cit.> and leads to a decomposition of _G into subcategories _G^c. 
The Fourier matrix has block diagonal form with blocks F_W^c indexed by the cells of W. Fix a Borel subgroup B of G. Let D^b_B,c(G/B) be the B-equivariant constructible bounded derived category of ℓ-adic sheaves on G/B. This is a monoidal category with respect to convolution. The geometric Hecke category ℋ_G of G is the subcategory of D^b_B,c(G/B) consisting of semisimple perverse sheaves <cit.>. This is a monoidal subcategory which categorifies H_W in the sense that there is an algebra isomorphism H_W →ℋ_G _⊕ , b_s ↦ B_s into the Grothendieck ring of ℋ_G, see <cit.>. Here, s ∈ W is a simple reflection and B_s is the constant sheaf supported on BsB/B. An important fact, which relies on the decomposition theorem <cit.>, is that under this isomorphism the Kazhdan–Lusztig basis element b_w for w ∈ W gets mapped to a (uniquely characterized) direct summand B_w of products of B_s corresponding to a reduced expression of w. This mirrors the properties of the Kazhdan–Lusztig basis and the indecomposable objects { B_w }_w ∈ W of ℋ_G categorify the basis { b_w }_w ∈ W. Fix a cell c and let ℋ_G^c be the subcategory of ℋ_G consisting of sheaves supported on c. This category is monoidal as well, but with respect to truncated convolution <cit.>. We call it the asymptotic Hecke category since it is a categorification of Lusztig's asymptotic algebra <cit.>. It follows from <cit.> that there is a finite group Γ_W^c, a finite Γ_W^c-set Y_W^c, a 3-cocycle ω_W^c on Γ_W^c, and a monoidal equivalence ℋ_G^c ≃Coh_Γ_W^c^ω_W^c(Y_W^c × Y_W^c) , the latter being the category of Γ_W^c-equivariant sheaves on Y_W^c × Y_W^c with convolution as tensor product and associator ω_W^c. This description is key to understanding the following construction more explicitly. The Drinfeld center of a monoidal category 𝒞 is the category 𝒵(𝒞) of pairs (Z,γ), where Z ∈𝒞 and γ is a functorial isomorphism γ_X X ⊗ Z ≃⟶ Z ⊗ X for all X ∈𝒞 which is compatible with the associator, see <cit.>. Note that (<ref>) induces a braiding on the Drinfeld center. From (<ref>) one obtains 𝒵(ℋ_G^c) ≃Γ_W^c-Vec_Γ_W^c^ω_W^c as braided monoidal categories, the latter being the category of Γ_W^c-equivariant Γ_W^c-graded vector spaces, see <cit.>. This is a modular tensor category <cit.>, and it follows from <cit.> that its S-matrix (which involves the braiding) is equal to the Fourier matrix F_W^c. Lusztig <cit.> gave geometric meaning to 𝒵(ℋ_G^c) by constructing a mo­no­idal structure on _G^c and establishing a natural monoidal equivalence _G^c ≃𝒵(ℋ_G^c) . In particular, _G^c is a modular tensor category whose S-matrix is F_W^c. We note that the equivalence <ref> seems to fit into a more general “untruncated” picture that is being established by the work of Bezrukavnikov–Finkelberg–Ostrik <cit.>, Ben-Zvi–Nadler <cit.>, and Bezrukavnikov–Ionov–Tolmachov–Varshavsky <cit.>. Let 𝔥 be the root lattice of G. When placing 𝔥 in degree 2 of the algebra R of regular functions on ℚ_ℓ⊗_ℤ𝔥, then R is as a graded algebra canonically isomorphic to H_B^∙(pt,ℚ_ℓ) ≃ R, the total B-equivariant cohomology of ℚ_ℓ on a point. By <cit.>, taking B-equivariant total cohomology on G/B yields a fully-faithful monoidal graded functor ℋ_G → R , the latter category being the category of graded R-bimodules. Hence, ℋ_G is monoidally equivalent to a full subcategory of R: this is the category _W of Soergel bimodules introduced by Soergel <cit.>. The key feature of this category is that it can be constructed just from W (and a reflection representation 𝔥). 
Moreover, it can be defined naturally for any Coxeter group (when choosing an appropriate reflection representation 𝔥) and it yields a categorification of the Hecke algebra H_W generalizing (<ref>). It is a deep theorem by Elias and Williamson <cit.> that the indecomposable objects { B_w }_w ∈ W of _W categorify the Kazhdan–Lusztig basis as before. We should thus think of _W as the “Hecke category” of spetses of type W. This category provides us with a kind of “categorical geometry” even if there is no reductive group. For a two-sided cell c in a finite Coxeter group W, Lusztig <cit.> defined an asymptotic Hecke category _W^c and a monoidal structure on it, mimicking that of the asymptotic geometric Hecke category ℋ_G^c. Lusztig then took its Drinfeld center _W^c 𝒵(_W^c) . This should be considered as a category of “unipotent character sheaves” on the spetses of type W. Consequently, it should satisfy several properties. First of all, _W^c should be a modular tensor category as conjectured by Lusztig <cit.>. This is indeed true and was proven by Elias–Williamson <cit.>. We are thus down to the following conjecture. Let W be a non-crystallographic finite Coxeter group and let c be a cell in W. The S-matrix of _W^c is equal to the Fourier matrix F_W^c from <cit.>. In particular, the number of simple objects of _W^c is equal to the number of unipotent characters supported in c. The conjecture provides a uniform categorification—and thus deeper meaning—of the ad hoc constructions of unipotent characters and the Fourier transform for non-crystallographic finite Coxeter groups. Moreover, the Fourier transform matrix for the big cell for H_4 given by Malle <cit.> is not yet known to be an S-matrix of a modular tensor category: the conjecture provides, for the first time, a precise candidate. §.§ Results in this paper First, we note that the asymptotic Hecke category _W^c is multi-fusion in the language of <cit.>, see sub:asymptotic_hecke. It thus has a component fusion subcategory _W^h, corresponding to a diagonal H-cell h in c, see sub:h-reduction. An elementary but crucial observation is that the centers of _W^h and _W^c are equivalent, see equ:h_reduction_center and the general prop:centr_reduction. We can thus work with _W^h, which is simpler. We show in sec:dihedral (see thm:dihedral) that lusztig_conjecture holds for dihedral groups. The key is to identify _W^h with the even part of the Verlinde category and noticing that the fusion data of the center of the latter categories are already in the literature. To be more precise, while the asymptotic Hecke algebra can be seen directly to be isomorphic to the Grothendieck ring of the even part of the Verlinde category, written Ad(𝒞_n), we need more known results to see that the this algebra is not categorified by a different category, see sub:adjoint_category. This allows us to compute with the asymptotic Hecke category without needing the category of Soergel bimodules. The center of Ad(𝒞_n) has also been computed in the literature but without any connections to our setting. We describe the computation, give small examples, and show how its S-matrix coincides with the Fourier matrix by Lusztig. By similar means we confirm Conjecture <ref> for some (we cannot resolve all) cells of H_3 and H_4 in Section <ref>. In some cells we still have two different options for the categorification and we point out which one is the “right” one assuming lusztig_conjecture holds. 
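At the Grothendieck-ring level, the dihedral identification just described can be illustrated with a few lines of code. The sketch below computes the SU(2)-type truncated (Verlinde) fusion multiplicities at a chosen level and checks that the even-indexed simple objects span a fusion subring, which is the decategorified shadow of passing to the even part Ad(𝒞_n); the exact relation between the level used here and the parameter n of I_2(n) is not asserted, and the level is chosen purely for illustration.

import numpy as np

def verlinde_fusion(level):
    # Fusion multiplicities N[i, j, k] of the SU(2) Verlinde ring at the given level:
    # simple objects X_0, ..., X_level with the truncated Clebsch-Gordan rule.
    r = level + 1
    N = np.zeros((r, r, r), dtype=int)
    for i in range(r):
        for j in range(r):
            for k in range(abs(i - j), min(i + j, 2 * level - i - j) + 1, 2):
                N[i, j, k] = 1
    return N

level = 6                                  # illustrative choice, not tied to a specific I_2(n)
N = verlinde_fusion(level)

# Associativity of the fusion product (a consistency check on the rule above):
assert (np.einsum("ijm,mkl->ijkl", N, N) == np.einsum("jkm,iml->ijkl", N, N)).all()

# The even part: objects X_0, X_2, ... are closed under fusion, hence form a fusion subring.
even = [i for i in range(level + 1) if i % 2 == 0]
odd = [i for i in range(level + 1) if i % 2 == 1]
closed = all(N[i, j, k] == 0 for i in even for j in even for k in odd)
print(closed)                              # True
print(N[2][np.ix_(even, even)])            # fusion matrix of X_2 acting on the even part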
Only the middle cell in type H_4, the one of a-value 6, remains a complete mystery we cannot resolve yet. Finally, we note that the asymptotic Hecke category can also be constructed for arbitrary (not necessarily finite) Coxeter groups and a finite Kazhdan–Lusztig cell. In sec:infinite_cases we study infinite Coxeter groups having a finite cell of a-value equal to or less than 2. We describe the corresponding asymptotic Hecke category and its center. Even though we did not find new fusion or modular tensor categories in these examples, we expect some new examples will arise from the setting of asymptotic Hecke categories. We begin in Section <ref> with a detailed review of the construction of the asymptotic Hecke category. In Section <ref> we discuss generalities about its center and summarize known results in the Weyl group case. For all except 3 so-called exceptional cells c in type E_7 and E_8 the asymptotic Hecke category is known to be of the form Coh_G_c(X_c× X_c) for some group G_c and a G_c-set X_c. The possibilities for (G_c,X_c) are due to Lusztig and listed in ex:classification. We show in rem_center_hreduction how the Drinfeld center of the multifusion category Coh_G_c(X_c× X_c) is equivalent to that of Vec(G_c) using the method of H-reduction as described in sub:h-reduction. The S-matrices are listed in cor_smatrix_weyl and we get the same matrices as the combinatorially computed results of Lusztig in <cit.>. §.§ Acknowledgements We would like to thank Ben Elias, Daniel Tubbenhauer, and Geordie Williamson for many helpful discussions on this topic. The first author further thanks Ben Elias for his hospitality during a 2 months stay at the University of Oregon last summer. We would furthermore like to thank Fabian Mäurer for the development of software for computing the center of a fusion category <cit.> which helped us find some key ideas. We would like to thank Gunter Malle for comments on a preliminary version of this paper. This work was supported by the SFB-TRR 195 `Symbolic Tools in Mathematics and their Application' of the German Research Foundation (DFG). § THE ASYMPTOTIC HECKE CATEGORY We describe the construction of the asymptotic, or truncated, Hecke category categorifying the asymptotic Hecke algebra associated to a two-sided Kazhdan–Lusztig cell of a Coxeter group. The main construction is due to Lusztig <cit.>. We start with the construction of the asymptotic Hecke algebra and then go into detail into the construction of the asymptotic Hecke category. §.§ The asymptotic Hecke algebra We use the notation of <cit.>. For a Coxeter system (W,S) we denote by H_W the (equal parameter) Hecke algebra, a unital associative algebra over A[v^± 1] generated by elements δ_w for w∈ W and subject to the quadratic (δ_s-v^-1)(δ_s+v)=0 and braid relation. The Kazhdan–Lusztig basis is denoted by {b_w| w∈ W }⊆ H_W, and we write b_x = δ_x + ∑_y<xh_y,xδ_y with the Kazhdan–Lusztig polynomials h_y,x∈ v[v]. Furthermore, we define polynomials h_x,y,z in A such that b_xb_y=∑_z∈ Wh_x,y,zb_z. We write z←_L y if there exist an x∈ W such that h_x,y,z≠0. We extend this relation to a preorder <_L and call an equivalence class with respect to <_L a left or L-(Kazhdan–Lusztig) cell. Similarly, we define <_R for right or R-cells. Finally, let x<_J y be the extension of the relation x←_J y, which means x←_L y or x←_R y, and let the equivalence classes of <_J be called J- or two-sided-cells. These relations are due to Green <cit.> for monoids and have been extended to algebras and categories. 
On W we define the a-function a: W→∪{∞} to send z∈ W to the smallest integer a(z)∈ such that v^a(z)h_x,y,z∈[v] for all x,y∈ W, or to infinity if no such integer exists. It is conjectured that no case with an infinite value occurs, see <cit.>. We will only consider bounded Coxeter groups, i.e. of finite a-value. For any x,y,z∈ W let now γ_x,y,z^-1 h_x,y,zv^a(z)(0)∈, be the coefficient of the v^-a(z)-term in h_x,y,z. Using the coefficients γ_x,y,z one defines a new ring structure on the set ⟨ j_w | w ∈ W ⟩_, see <cit.>. The asymptotic Hecke algebra or J-ring J J_W is the free abelian group generated by {j_x | x∈ W } subject to the relations j_xj_y=∑_z∈ Wγ_x,y,z^-1j_z, for all x,y,z. We can say more on the properties of the ring structure. Let R be a unital ring which is free as a -module. We call R a based ring if, for a fixed basis B={b_i}_i∈ I of R, we have: * b_ib_j=∑_k∈ Ic_i,j^kb_k, for c_i,j^k∈_≥0, * The unit 1∈ R is a non-negative linear combination in the basis. We denote by I_0 all b_i occurring in the decomposition of 1. Write τ:R→ for the group homomorphism sending b_i to 1 if i∈ I_0 and to 0 otherwise. * There is an involution i↦ i^* on I such that the induced map R→ R,  kb_i↦ kb_i^* is an anti-involution on R and τ(b_ib_j) is 0 for j≠i^* and 1 if j=i^* (This means that in b_ib_i^* exactly one basis summand of the unit occurs exactly once) If the basis is finite, i.e. R is of finite rank, we call it a multifusion ring. If furthermore 1∈ B we call it a fusion ring. For finite W we always have γ_x,y,z≥ 0. If W is crystallographic this was shown in <cit.> and <cit.>. For the non-crystallographic types H_3,H_4 and I_2(m) there have been explicit calculations, see <cit.>. The J-ring of a finite Coxeter group is a multifusion ring, the unit element is of the form ∑_t∈ Dj_t for D the set of Duflo involutions in W. It is furthermore conjectured that the J-ring is still locally unital in general bounded Coxeter groups, i.e. the formal sum ∑_t∈ Dj_t acts in the same way an identity would. We refer to <cit.>. By <cit.> we have that γ_x,y,z≠0 implies that x,y,z lie in the same two-sided cell c⊂ W. The J-ring therefore decomposes into a direct sum, i.e. if we denote by J_c⟨ j_x| x∈ c⟩ the restriction of J_W to a two-sided cell c⊆ W we have a decomposition J_W ≃⊕_c⊂ WJ_c. We will call J_c the asymptotic Hecke algebra associated to c. Any such summand J_c itself is a multifusion ring with the unit being the sum of all Duflo involutions lying in the cell c. §.§ Construction of the asymptotic Hecke category Let now ℋ_W be the category of Soergel bimodules associated to a given Hecke algebra H_W, see <cit.>. In <cit.> Elias and Williamson showed that the monoidal product of the asymptotic Hecke category described in <cit.> by Lusztig is rigid. This implies that the asymptotic Hecke category for a two-sided cell with finitely many left cells is multifusion, see rem:multifusion_asymptotic. For finite Weyl groups we list in ex:classification for which cases a description of the asymptotic Hecke category is known. We go through their computations and motivate the construction of the asymptotic Hecke category in parallel to that of the asymptotic Hecke algebra. One key observation of Elias and Williamson is that the direct sum decomposition for Soergel bimodules is not canonical, therefore we get problems if we would naively try to define an asymptotic monoidal product by just taking the ordinary monoidal product and sending it to the lowest graded summand. 
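As a decategorified illustration of the definition of a multifusion ring above (and of a shape that a summand J_c can take when the unit decomposes), consider the ring of 2×2 “matrix units”; it is the Grothendieck ring of Coh(Y×Y) for a two-element set Y with convolution, and it is multifusion but not fusion, since its unit is the sum of two basis elements. The short check below verifies the based-ring axioms for it; the names are illustrative and the example is not tied to a specific cell.

from itertools import product

# Basis of a small multifusion ring: matrix units b_(i,j) for i, j in {0, 1}.
basis = [(i, j) for i in range(2) for j in range(2)]

def mult(x, y):
    # Structure constants: b_(i,j) * b_(k,l) = delta_{j,k} * b_(i,l)
    (i, j), (k, l) = x, y
    return {(i, l): 1} if j == k else {}

unit = [(0, 0), (1, 1)]                     # the unit decomposes: 1 = b_(0,0) + b_(1,1)
star = lambda x: (x[1], x[0])               # the anti-involution swaps the two indices

def tau(prod):
    # tau counts the unit summands appearing in a product
    return sum(prod.get(u, 0) for u in unit)

nonnegative = all(c >= 0 for x in basis for y in basis for c in mult(x, y).values())
axiom3 = all(tau(mult(x, y)) == (1 if y == star(x) else 0) for x, y in product(basis, basis))
print(nonnegative, axiom3)                  # True True

Restricting to a single diagonal index recovers the fusion ring spanned by b_(i,i) alone, a decategorified instance of the H-reduction discussed later in the paper.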
Following <cit.> one can define a canonical direct sum decomposition using the perverse filtration on Soergel bimodules. This is seen in <cit.>. For s∈ W a reflection and B_s∈ℋ_W the Bott-Samelson bimodule corresponding to s, we have B_s⊗ B_s≃ B_s(+1)⊕ B_s(-1). In the Hecke algebra we have accordingly b_sb_s=(v+v^-1)b_s. The a-value of s is 1 and in the J-ring this implies j_sj_s=j_s. One would like to find morphisms inside ℋ_W from B_s(-1) to B_s⊗ B_s and vice versa to construct a categorification of the J-ring. However, by Soergel's Hom formula, while the graded rank of the space Hom_ℋ_W(B_s⊗ B_s,B_s(-1)) is v^-1+2v+v^3 and therefore a projection to B_s(-1) is unique up to scalar, the inclusion is not unique. Two different direct sum decompositions can be found in <cit.>. This means that B_s(-1) is not a canonical subobject and one cannot directly replicate the multiplication of the J-ring on the category level. This shows that the lowest graded summand of a Soergel bimodule is not canonical. While the multiplication in the J-ring can be defined by ignoring all higher gradings, we cannot just define a monoidal product in the same way. The main result of <cit.> was to show relative hard Lefschetz for Soergel bimodules, as this allows to talk about certain “canonical” submodules of Soergel bimodules. To be more precise, we call a Soergel bimodule B perverse if it is isomorphic to a direct sum of Bott-Samelson bimodules without shifts. For an arbitrary Soergel bimodule B the perverse filtration is of the form …⊂τ_≤ i B⊂τ_≤ i+1B⊂…, where τ_≤ i B lies in the full subcategory of Soergel bimodules only generated by objects B_x(m) for m≥ -i. Similarly, we consider B/τ_≤ i B which lies in the full subcategory of Soergel bimodules only generated by objects B_x(m) for m<-i. We then write H^i(B) (τ_≤ iB/τ_< iB)(i) for the perverse cohomology of B. Fix a Coxeter system (W,S) and let ℋ_W be the associated category of Soergel bimodules. If ρ∈ R B_id∈ℋ_W is dominant regular (i.e. ∂_s(ρ)>0 for all s∈ S) and x,y∈ W are arbitrary the morphism η: B_x⊗_R B_y → B_x⊗_R B_y(2), b⊗ b'↦ bρ⊗ b'=b⊗ρ b' induces an isomorphism η^i: H^-i(B_x⊗_R B_y) H^i(B_x⊗_R B_y) for all i. For s∈ W an arbitrary reflection this theorem gives then an isomorphism B_s(-1)≃ H^-1(B_s⊗_R B_s)≃ H^1(B_s⊗_R B_s)≃ B_s(+1). We can therefore use the canonical projection map to the lowest graded summand and the canonical inclusion map from the highest graded summand to define the asymptotic Hecke category. In general any object lying over j_z for a summand of j_xj_y should not only correspond to the lowest graded part of B_xB_y, but also the highest. With relative hard Lefschetz we can identify these by η^i. The maps corresponding to the tensor product should look like: [column sep=40pt] H^-i(B_xB_y) [r,dashed,"η^i"] [d,hookrightarrow,swap,"canonical inclusion"] H^i(B_xB_y) B_xB_y[r,swap,"η^i"] B_xB_y[u,twoheadrightarrow,swap,"canonical projection"] This motivates the definition of a monoidal category categorifying J_c. The following is a compression of the construction of <cit.>. Fix a two-sided Kazhdan–Lusztig cell c with a-value i in a Coxeter system (W,S). * We define the subcategory ℋ_W^<c as the full subcategory of ℋ_W generated by objects B_x such that x<_Jc. Let ℐ^c denote the tensor ideal of morphisms in ℋ_W factoring over objects of ℋ_W^<c. We define the quotient category by (ℋ_W^c)'ℋ_W/ℐ^c. * Inside (ℋ_W^c)' we restrict to the full graded additive subcategory ℋ̃_W^c generated only by objects B_x for x∈ c. 
* We now enrich the grading free full subcategory ℋ_W^c of ℋ̃_W^c (i.e. the subcategory generated only by B_x without shifts) with a new monoidal product by using the i-th perverse cohomology: B_x⋆ B_y H^-i(B_xB_y) ∈ℋ_W^c. The quotient construction of (ℋ_W^c)' is necessary to account for the fact that in the construction of the J-ring one discards any summand of j_xj_y lying in lower cells. The perverse filtration of ℋ_W descends to ℋ̃_̃W̃^c and ℋ_W^c and any H^-i(B_xB_y) contains no summands of lower cells. For the monoidal structure on ℋ_W^c we use inclusions and projections as in (<ref>). The associators afforded by the product are in general non-trivial, we will see a concrete example in type I_2(n) in ex:fusiondata. § THE CENTER OF THE ASYMPTOTIC HECKE CATEGORY We follow the categorical notation of <cit.>. We recall the definition of a multifusion category and list some properties on the Drinfeld center of multifusion categories. This section will motivate that one can reduce the study of the asymptotic Hecke category of a J-cell c to that of a so called H-cell h⊂ c, which is considerably smaller. §.§ Multifusion categories Let be an algebraically closed field. Outside this section we assume that all categories we consider are =ℂ-linear. <cit.> A category 𝒞 is multifusion if it is a locally finite -linear abelian rigid monoidal and semisimple category, such that the bifunctor ⊗: 𝒞×𝒞→𝒞 is bilinear on morphisms and we have only a finite number of simple objects. If furthermore _𝒞(1)≃ for 1 the monoidal unit, we call 𝒞 a fusion category. Examples for fusion categories are the category of G-graded finite dimensional -vector spaces Vec(G)Vec_(G) or Rep(G), Rep_(G) the category of representations of G over for a finite group G if the characteristic of and the order of G are coprime. Let K(𝒞) denote the Grothendieck ring of a multifusion category 𝒞. By definition, it is a multifusion ring by choosing the equivalences classes of the simple objects as basis elements. By <cit.> the asymptotic Hecke category is rigid and pivotal. We have seen in rem:duflo that the asymptotic Hecke algebra J_c is multifusion if c is finite. Therefore, ℋ_W^c is a multifusion category. The sum ⊕_d∈ DB_d for D the set of Duflo involutions is then the unit of ℋ_W^c. By <cit.> in a multifusion category 𝒞 the space End_𝒞(1) is always a semisimple algebra, i.e. isomorphic to a direct sum of finitely many copies of . We can therefore write 1=⊕_i∈ I1_i, for 1_i non-isomorphic indecomposable objects. Let 𝒞 be a multifusion category and let 1=⊕_i∈ I1_i be a decomposition of the unit into indecomposable objects. For i,j∈ I we define the component subcategory 𝒞_ij1_i⊗𝒞⊗1_j to be the full subcategories of 𝒞 generated by all objects of the form 1_i⊗ X ⊗1_j. As abelian categories this gives a decomposition 𝒞≃⊕_i,j∈ I𝒞_i,j, the monoidal product maps 𝒞_ij×𝒞_jk into 𝒞_ik and the duals of 𝒞_ij lie in 𝒞_ji, see <cit.>. Any 𝒞_ii is then also a fusion category with 1_i as the monoidal unit. We will see in the next section that the Drinfeld center of a multifusion category is equivalent to that of the fusion subcategories. This will be applied in sub:h-reduction to the asymptotic Hecke category. §.§ The Drinfeld center of multifusion categories We recall the definition of the Drinfeld center, see <cit.>. Let 𝒞 be a monoidal category. The center 𝒵(𝒞) is a category with objects (Z,γ) where Z∈𝒞 and γ is a family of natural morphisms γ_X:X⊗ Z → Z ⊗ X for all X∈𝒞 satisfying the hexagon axiom. Most properties of 𝒞 transfer to 𝒵(𝒞). 
For example the center is always monoidal, and it is also fusion if 𝒞 is, see <cit.>. The Drinfeld center 𝒵(𝒞) is a special case of a braided monoidal category. This is a monoidal category 𝒟 where every object X∈𝒟 affords a braiding c_X,-:X⊗ - → -⊗ X satisfying the hexagon axiom. The following definition works analogously for braided categories. Let 𝒞 be a fusion category and 𝒵(𝒞) its Drinfeld center. For (Z_i,γ^i)_i a complete list of all simple objects of 𝒵(𝒞) we define the S-matrix of 𝒵(𝒞) to be S (tr(γ^i_X_j∘γ^j_X_i))_i,j, where tr denotes the trace of an endomorphism f:X→ X, i.e. the element in corresponding to f after applying the evaluation and coevaluation, see <cit.>. Let G be a finite group. The Drinfeld center of Vec(G) is completely described, see <cit.>. It is 𝒵(Vec(G))≃ (Vec(G))^G, the category of G-equivariant G-graded vector spaces where G acts on Vec(G) by conjugation. The simple objects are in correspondence to the set of pairs (C,V), where C is a conjugacy class of G and V is a simple representation up to conjugacy of the stabilizer subgroup of C. For G=S_3 this gives for example 8 simple objects in the center, 3 lying over the trivial conjugacy class, 3 over the conjugacy class of the 3-cycle and 2 over the conjugacy class of the 2-cycle. By <cit.> the S-matrix is S_(C,V),(C',V') = G/C_G(a)C_G(a')∑_g∈ G(a,a')tr_V(ga'g^-1)tr_V'(g^-1ag), where a∈ C,a'∈ C' and G(a,a')={g∈ G| aga'g^-1=ga'g^-1a}. For a multifusion category 𝒞 we can even reduce the center to the center of the fusion subcategories 𝒞_ii for i∈ I. We call 𝒞 indecomposable if we cannot partition the set I into non-empty subsets I=J∐ K, such that for all j∈ J and k∈ K we have 𝒞_j,k=0. This means that one cannot write 𝒞 as a direct sum of multifusion categories. The center of a decomposable multifusion category is the direct sum of the centers of the summands, if on the other hand 𝒞 is indecomposable we can express the center in terms of any fusion subcategory 𝒞_ii. For a multifusion category 𝒞 with component fusion subcategories 𝒞_ii for 1≤ i ≤ n we have 𝒵(𝒞) ≃𝒵(𝒞_ii). Therefore, the center of an indecomposable multifusion category is fusion. This is Theorem 2.5.1 in <cit.>. One can define the notion of a module category ℳ over a multifusion category. On the Grothendieck level this categorifies the notion of a module over a ring. Since inside 𝒞 the component subcategory 𝒞_ij maps 𝒞_jk into 𝒞_ik, one can regard 𝒞_ij as a (𝒞_ii,𝒞_jj)-bimodule category. Then the action of the component subcategories on each other extends to the following equation (here we use the Deligne's tensor products, see <cit.>): 𝒞_ij⊠_𝒞_jj𝒞_jl≃𝒞_il as (𝒞_ii,𝒞_ll)-bimodules. Now define the (𝒞_ii,𝒞)-bimodule and (𝒞,𝒞_ii)-bimodule categories ℳ_i⊕_j𝒞_ij and 𝒩_i⊕_j𝒞_ji. They are what is called invertible in <cit.>, i.e. ℳ_i⊠_𝒞𝒩_i≃𝒞_ii and 𝒩_i⊠_𝒞_iiℳ_i≃𝒞. Now following <cit.> these equations show 𝒵(𝒞_ii) ≃𝒵(𝒞). We apply this result in the next section to the asymptotic Hecke category. §.§ H-cell reduction We reduce the computation of the center of the asymptotic Hecke category associated to a J-cell to that of an H-cell. This process, called H-reduction or Clifford–Munn–Ponizovskiĭ theory has been applied to monoids, algebras and categories, see for example <cit.>. Let W be a Coxeter group and c⊂ W a J-cell. The decomposition of the asymptotic Hecke category ℋ_W^c into component subcategories comes from the decomposition of c into left and right cells. 
By rem:duflo,rem:directsum the monoidal unit is the direct sum of all objects lying over Duflo involutions in the J-ring: 1_ℋ_W^c=⊕_1≤ i ≤ nB_d_i, where {d_i} is the set of all Duflo involutions in W. Let now c^L_i and c^R_i for 1≤ i ≤ n be a list of the left and right cells, such that d_i∈ c^L_i∩ c^R_i. We call a non-empty intersection of a left and a right cell an H-cell. If an H-cell contains a Duflo involution we call it diagonal. Any diagonal H-cell with d_i∈ h_i=c^L_i∩ c^R_i⊂ W gives a component subcategory ℋ_W^h (ℋ_W^c)_ii=B_d_i⊗ℋ_W^c⊗ B_d_i of ℋ_W^c. This is a fusion category and we have 𝒵(ℋ_W^h) ≃𝒵(ℋ_W^c) by prop:centr_reduction. Hence, the computation of the Drinfeld center of the asymptotic Hecke category of a J-cell reduces to that of an H-cell. §.§ The centers of the asymptotic Hecke category for finite Weyl groups For finite Weyl groups the asymptotic Hecke categories have been known using classical geometric results. We give an overview on the classification and describe their centers and S-matrices using H-reduction. By <cit.> we have an assignment of a two-sided Kazhdan–Lusztig cell c in a Weyl group to a finite group G_c and an embedding c→ M(G_c), where M(G_c) consists of tuples (g,V) for g∈ G_c unique up to conjugacy and V a simple representation of the centralizer of g. For any left cell c^L⊆ c there is further an association to a subgroup H_c^L≤ G_c in <cit.> such that the asymptotic Hecke algebra J_h associated to the H-cell h c^L ∩ (c^L)^-1 is, as a based (or multifusion) ring, conjectured to be isomorphic to K_G_c(G_c/H_c^L× G_c/H_c^L), which is short for the Grothendieck ring of Coh_G_c(G_c/H_c^L× G_c/H_c^L), the category of G_c-equivariant coherent sheaves on the set (G_c/H_c^L)^2. Furthermore, the conjecture <cit.>, is extended to the claim that the disjoint union X∐_c^L⊂ cG_c/H_c^L gives a multifusion ring isomorphic to K_G_c(X× X)≃ J_c. This was proven by Lusztig himself in the case that G_c is abelian. A complete proof was achieved by Bezrukavnikov, Finkelberg and Ostrik in <cit.>. For all but three exceptions in type E_7 and E_8, they even showed that J_c is categorified by Coh_G_c(X× X) for the same G_c-set X. We call the 3 exceptions the exceptional cells. The results presented above are summarized in the following example: The categories ℋ_W^h for a diagonal H-cell h=c^L∩ (c^L)^-1 in a non-exceptional two-sided Kazhdan–Lusztig cell c of a finite Weyl group W are given by Coh_G_c(G_c/H_c^L× G_c/H_c^L), i.e. categories of equivariant coherent sheaves on a finite set, for the following possibilities of G_c and H_c^L: * In type A_n any H-cell has size 1, we always have G_c={⋆}=H_c^L * In type B_n the size of an H-cell is 2^k for some k^2+k≤ n. The groups G_c and H_c^L are some elementary abelian 2-groups. * In type D_n we have the same result as in B_n except that k^2≤ n. * In type E_6 to E_8 the group G_c is a symmetric group on at most 5 letters S_1,…, S_5. * For G_c=S_3 we can have H_c^L∈{S_1,S_2,S_3} * For G_c=S_4 we can have H_c^L∈{S_2,S_2× S_2, S_3, D_4, S_4} * For G_c=S_5 we can have H_c^L∈{S_2,S_2× S_2,S_3,D_4,S_2× S_3,S_4,S_5} * In type F_4 we get G_c < S_4 with the same possible subgroups for H_c^L as before * In type G_2 we get G_c∈{S_1,S_3}, where for G_c=S_3 only H_c^L=S_2 occurs. We want to motivate the connection of the set M(G_c) to the center of Coh_G_c(X× X). The categories 𝒞Coh_G_c(X× X) are multifusion. If X=∪ X_i is a disjoint union into transitive G_c-sets X_i, the categories 𝒞_ijCoh_G_c(X_i× X_j) are component subcategories. 
By prop:centr_reduction the centers of 𝒞_ii and 𝒞 are equivalent. If one chooses X=G_c we have Coh_G_c(X× X)≃Vec(G_c). Therefore, the center 𝒵(𝒞) is equivalent to the center of the category of G_c-graded vector spaces. Indeed, the set M(G_c) has exactly the same description as the simple objects of the center 𝒵(Vec(G_c))≃ (Vec_G_c)^G_c as seen in ex:center_vecg. Furthermore, the S-matrix computed for 𝒵(Vec(G)) coincides with the pairing on M(G_c) defined in <cit.>, modulo a constant term. The pairing is {(x,σ),(y,τ)}∑_g∈ G_c, xgyg^-1=gyg^-1xtr(g^-1x^-1g,τ)tr(gyg^-1,σ)/C_G_c(x)C_G_c(y), which is exactly the S-matrix of the Drinfeld center divided by G. The factor G is equal to the square root of the categorical dimension of 𝒵(Vec(G_c)), hence the difference in formulas comes only from a convention on normalization. We will refer to the S-matrix divided by the square root of the categorical dimension as normalized, see <cit.>. As the center of a monoidal category is itself monoidal we have a multiplication on 𝒵(Vec(G_c)), while we have no direct way to define a multiplication on M(G_c). In <cit.> Geck and Malle worked out a possible multiplication table for M(G_c) in type G_2, in which case we have G_c=S_3. The monoidal product on 𝒵(Vec(S_3)) coincides with the table given by Geck and Malle. Let c be a non-exceptional two-sided Kazhdan-Lusztig cell in a finite Weyl group W. The asymptotic Hecke category associated to c as well as the S-matrix of its center is one of the following cases: * For any c where a diagonal H-cell has size 1 we have ℋ_W^c=Coh(X× X) where X has the same cardinality as the number of left and right cells in c. We have ℋ_W^h≃Coh(⋆)≃Vec for any diagonal H-cell. The center 𝒵(ℋ_W^c)≃𝒵(ℋ_W^h)≃Vec has size 1 and the S-matrix is S_c=[ 1; ]. This happens for any cell in type A_n and also for all cells containing only the trivial element. More examples of cells can be found in <cit.> * If the asymptotic Hecke category of c is isomorphic to Coh_G(X× X) for an elementary abelian 2-group, i.e. G_c≃ (/2)^k, we have 𝒵(ℋ_W^c)≃𝒵(Vec(G_c))≃⊕_1≤ i ≤ k𝒵(Vec(/2)). The center then contains 4^k simple objects and the S-matrix is the k-fold Kronecker product of the S-matrix of 𝒵(Vec(/2)), which is S(𝒵(Vec(/2)))=[ 1 1 1 1; 1 1 -1 -1; 1 -1 1 -1; 1 -1 -1 1; ]. Since the dimension of Vec(/2) is 2 the normalization agrees with the table in <cit.>. * If the asymptotic Hecke category of c is isomorphic to Coh_G(X× X) with G=S_3 the center of the asymptotic Hecke category is 𝒵(Vec(S_3)) which has 8 simple objects and the S-matrix is S(𝒵(Vec(S_3))) = [ 4 2 2 0 0 -2 -2 2; 2 1 1 -3 -3 2 2 2; 2 1 1 3 3 2 2 2; 0 -3 3 3 -3 0 0 0; 0 -3 3 -3 3 0 0 0; -2 2 2 0 0 4 -2 -2; -2 2 2 0 0 -2 -2 4; -2 2 2 0 0 -2 4 -2; ]. Normalization by the dimension of (Vec(S_3))=6 gives the table of <cit.>. Note that some rows have been left out in that source, they are permutations of some rows given. * If the asymptotic Hecke category of c is isomorphic to Coh_G(X× X) with G=S_4 the center of the asymptotic Hecke category is 𝒵(Vec(S_4)) which has 21 simple objects. To count this we have to compute all centralizer subgroups and count their irreducible representations. The matrix can also be found in <cit.>. * If the asymptotic Hecke category of c is isomorphic to Coh_G(X× X) with G=S_5 the center of the asymptotic Hecke category is 𝒵(Vec(S_5)) which has 39 simple objects, again see <cit.>. 
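The elementary abelian cases in the corollary above are easy to check by machine. The following sketch (our own illustration, not taken from any cited source) computes the un-normalized S-matrix of 𝒵(Vec(A)) for a finite abelian group A, where the general formula above specializes to S_(g,χ),(h,ψ) = χ(h)ψ(g) because all centralizers equal A. For A = ℤ/2 it reproduces the 4×4 matrix displayed above, and for (ℤ/2)^k one obtains, up to reordering the simple objects, its k-fold Kronecker power.

```python
import numpy as np
from itertools import product

def s_matrix_center_vec_abelian(moduli):
    # Simple objects of Z(Vec(A)) for abelian A are pairs (g, chi) of a group
    # element and a character; all centralizers equal A, so the general
    # formula above reduces to S_{(g,chi),(h,psi)} = chi(h) * psi(g).
    elements = list(product(*[range(m) for m in moduli]))

    def chi(label, g):
        # character of A = Z/m_1 x ... x Z/m_r labelled by `label`
        return np.exp(2j * np.pi * sum(l * x / m for l, x, m in zip(label, g, moduli)))

    simples = [(g, lab) for g in elements for lab in elements]
    S = np.array([[chi(lab1, h) * chi(lab2, g) for (h, lab2) in simples]
                  for (g, lab1) in simples])
    return np.real_if_close(np.round(S, 10))

print(s_matrix_center_vec_abelian([2]))        # the 4x4 S-matrix displayed above
S_klein = s_matrix_center_vec_abelian([2, 2])  # 16x16; agrees with the Kronecker
                                               # square of the 4x4 up to reordering
```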
§.§ The exceptional cells in Weyl groups

In the three exceptional cases in type E_7 and E_8 we have a categorification of ℋ_W^c by <cit.>: For an exceptional cell c in type E_7 or E_8, there is a tensor equivalence ℋ_W^c≃Vec^ω(ℤ/2)⊠Coh(Y'× Y'). Note that the category ℋ_W^c is denoted by 𝒫_c in <cit.>. The set Y' has cardinality 512 for the exceptional cell in type E_7 and 4096 for the two exceptional cells in type E_8. The cardinality of the set Y' gives the number of left or right cells in c; the H-cells have only size 2 and are therefore categorified by Vec^ω(ℤ/2), where ω denotes the non-trivial twist. Let c⊂ W be an exceptional cell in type E_7 or E_8. The center of the asymptotic Hecke category associated to c is 𝒵(ℋ^c)≃𝒵(Vec^ω(ℤ/2)) for ω a non-trivial twist. We have 4 simple objects in 𝒵(ℋ^c_W) and the S-matrix is

S(𝒵(Vec^ω(ℤ/2))) = [ 1 1 1 1; 1 1 -1 -1; 1 -1 -1 1; 1 -1 1 -1 ].

§ THE DIHEDRAL CASE

We give a complete description of the asymptotic Hecke category associated to a dihedral group. We will see that the category is known in the literature as the even or adjoint part of the Verlinde category. The Drinfeld center of the Verlinde category and of its adjoint part are also known, and we give the complete fusion data. The computations done in this section were supported by the parallel works presented in <cit.>. Let W=⟨ s,t⟩ be the Coxeter group of type I_2(n), i.e. s^2=t^2=(st)^n=1.

§.§ The asymptotic Hecke algebra for dihedral groups

All data on the h_x,y,z and the asymptotic Hecke algebra are known, see for example <cit.>. For n≥ 3 there are always three two-sided cells. The neutral element always forms its own two-sided cell c_0={1}, as x≤_K 1 for all x∈ W and K∈{L,R,J} since b_x=b_1b_xb_1; its a-value is 0. Similarly, the longest word c_n={w_0}, for w_0=sts⋯ the alternating word of length n, forms its own two-sided cell of a-value n, as b_xb_w_0=pb_w_0 for some polynomial p∈ℤ[v^± 1]. Furthermore, any non-trivial word that has a unique reduced expression lies in the two-sided cell of a-value 1; these are all remaining elements, c_1={s,st,sts,…,t,ts,tst,…}. The left and right cells are characterized by the right and left descending sets. We can visualize the cell structure in a box diagram, where the big boxes correspond to J-cells, columns to R-cells, rows to L-cells and small boxes to H-cells:

{1}

s, sts, ststs, …  |  ts, tsts, …
st, stst, …       |  t, tst, …

{w_0}

The multiplication table of the J-ring can also be found in <cit.>. The coefficients γ_x,y,z are either 0 or 1. Denote by s_k the unique word of length k starting in s and by t_l the unique word of length l starting with t, for k,l<n. The multiplication in the J-ring is then

j_s_k j_a_l = 0 if k is even and a=s, or if k is odd and a=t,

and otherwise

j_s_k j_a_l = ∑_u=max{0,k+l-n}^min{k,l}-1 j_s_k+l-1-2u.

We can read off directly that the neutral element is j_s+j_t. In type I_2(5) this gives for example:

·       | j_s    | j_st          | j_sts         | j_stst
j_s     | j_s    | j_st          | j_sts         | j_stst
j_ts    | j_ts   | j_t + j_tst   | j_ts + j_tsts | j_tst
j_sts   | j_sts  | j_st + j_stst | j_s + j_sts   | j_st
j_tsts  | j_tsts | j_tst         | j_ts          | j_t

The structure of the multiplication is similar to the Clebsch–Gordan rule for the monoidal products of U(𝔰𝔩_2) representations. We see this explicitly for an H-cell. Denote the left and right cells by c^L_s ≔ {s,ts,sts,…} and c^L_t ≔ {t,st,tst,…} as well as c^R_s ≔ {s,st,sts,…} and c^R_t ≔ {t,ts,tst,…}. Then the diagonal H-cells are h_s ≔ c^L_s∩ c^R_s={s,sts,ststs,…} and h_t ≔ c^L_t∩ c^R_t={t,tst,tstst,…}. Inside h_s, for 1≤ i,j and i+j≤ n we have by (<ref>)

j_s_i j_s_j = j_s_|i-j|+1 + j_s_|i-j|+3 + … + j_s_i+j-1,

while whenever i+j > n the largest terms are truncated.
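The multiplication rule above is easy to implement. The following sketch (ours; the encoding of basis elements as pairs (first letter, length) and all function names are our own choices) computes the products j_s_k j_a_l and, for n=5, reproduces the I_2(5) table displayed above.

```python
from collections import Counter

def last_letter(word):
    a, k = word
    other = 't' if a == 's' else 's'
    return a if k % 2 == 1 else other

def j_mult(x, y, n):
    """Product j_x * j_y in the J-ring of the middle cell of I_2(n).
    Words are encoded as (first letter, length), e.g. ('s', 3) = sts.
    Returns a Counter of basis words (all structure constants are 0 or 1)."""
    (a, k), (b, l) = x, y
    if last_letter(x) != b:          # incompatible left/right cells
        return Counter()
    lengths = [k + l - 1 - 2 * u for u in range(max(0, k + l - n), min(k, l))]
    return Counter({(a, m): 1 for m in lengths})

def show(word):
    a, k = word
    other = 't' if a == 's' else 's'
    return ''.join(a if i % 2 == 0 else other for i in range(k))

# Reproduce the I_2(5) table from above.
n = 5
rows = [('s', 1), ('t', 2), ('s', 3), ('t', 4)]    # s, ts, sts, tsts
cols = [('s', 1), ('s', 2), ('s', 3), ('s', 4)]    # s, st, sts, stst
for x in rows:
    entries = [' + '.join('j_' + show(z) for z in sorted(j_mult(x, y, n)))
               for y in cols]
    print(f"j_{show(x)}: " + " | ".join(entries))
```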
We will explore these fusion rings in the next section as they give a categorification of the asymptotic Hecke algebra. §.§ Type A_n-fusion categories We say that a fusion category 𝒞_k has fusion rules A_k if it has k simple objects, which we may labeled by X_0,…,X_k-1, such that the fusion graph showing the monoidal product by X_1 is the Dynkin diagram of type A_k. [Coxeter, labels=X_0,X_1,X_2,X_k-2,X_k-1, scale=2] Aooo...oo This means, that X_1⊗ X_0≃ X_1≃ X_0⊗ X_1, X_1⊗ X_k-1≃ X_k-2≃ X_k-1⊗ X_1 and X_1⊗ X_i≃ X_i-1⊕ X_i+1 for all 1≤ i ≤ k-2. These categories are mentioned in <cit.> under the name Verlinde–Wess–Zumino–Witten. An explanation can also be found in <cit.>. We note, that one can inductively show that the monoidal product is a truncated version of the monoidal product of 𝔰𝔩_2 representations. If V_i are the simple representations with V_i⊗ V_j≃ V_i-j⊕ V_i-j+2⊕…⊕ V_i+j for any i,j∈, then a specialization of Lusztigs quantum group U_q(𝔰𝔩_2) at a root of unity nullifies or truncates certain summands. This happens exactly when the quantum number corresponding to the root of unity is zero. For example in a fusion category of type A_3 we have X_1⊗ X_2≃ X_1, while for 𝔰𝔩_2-representations we would have V_1⊗ V_2≃ V_1⊕ V_3. However, in the specialization seen in the next example one would have [3+1]=0 and hence V_3 does not occur. An overview of the categorical data of fusion categories with type A_k fusion rules can be found in <cit.>. All associators have been classified by <cit.>. For any natural number k the monoidal categories with fusion rules A_k are classified by l∈/(k+1) coprime to k+1. We denote them by 𝒞_k^l. The associator of 𝒞_k^l is defined as follows. Let q e^lπ i/k+1 be a 2(k+1)-th root of unity, then we set the quantum numbers to be [0]^l 0, [1]^l 1, [2]^l q^l+q^-l, and inductively [n]^l [2]^l[n-1]^l-[n-2]^l. We define the quantum factorial via [m]^l! [1]^l[2]^l·…·[m]^l. We say that a triple of natural numbers (a,b,c) all smaller than k is k-admissible if ma+b-c/2,   na+c-b/2, pb+c-a/2 are also natural numbers and a+b+c≤ 2k-2. This is equivalent to saying that X_c occurs as a summand of X_a⊗ X_b. The 6j-symbols have been computed by Kauffman and Lins in <cit.>. For fixed (a,b,c) we consider all numbers d,e,f such that X_d is a summand of X_a⊗ X_b ⊗ X_c and (a,b,f), (c,d,f), (a,d,e) and (b,c,e) are k-admissible. Then the 6j-symbol of (X_a⊗ X_b)⊗ X_c→ X_a⊗ (X_b⊗ X_c) for the summand X_f of X_a⊗ X_b and X_e of X_b⊗ X_c has the form a b e c d f = ℐ!(-1)^e+1[e+1]/ℰ!θ(a,d,e)θ(b,c,e)∑_n≤ s ≤ N(-1)^s[s+1]!/∏_i[s-a_i]!∏_j[b_j-s]!, where θ(a,b,c)(-1)^m+n+p[m+n+p+1]![m]![n]![p]!/[m+n]![m+p]![n+p]!, and ℐ!∏_i,j[b_j-a_i]!, ℰ! [a]![b]![c]![d]![e]![f]!, where a_1a+d+e/2, a_2b+c+e/2, a_3a+b+f/2, a_4c+d+f/2, and b_1b+d+e+f/2, b_2a+c+e+f/2, b_3a+b+c+d/2 and n is the maximum value of a_i and N the minimum of b_j. If the exact choice of root of unity is not relevant we only write 𝒞_k. The computations of the 6j-symbols in <cit.> have been done using the Temperley–Lieb algebra. In the definition of the Temperley–Lieb algebra one needs to choose a value for the evaluation of the loop. Exactly when one chooses the quantum number [2]=q+q^-1 coming from a (k+1)-th root of unity q=e^mπ i/k+1 we land in the case of type A_k fusion categories. For the later calculations it is not relevant what l is, everything is given in terms of quantum numbers. §.§ The adjoint part of type A_k fusion categories In <cit.> the subcategory of 𝒞_n generated by the even elements is called the adjoint subcategory. 
An explanation for this term can be found in <cit.>. For a based ring A with basis B={b_i} we call the smallest subring A_ad⊂ A, such that all b_ib_i^* lie in A_ad the adjoint subring. For a fusion category 𝒞 we write Ad(𝒞) for the full fusion subcategory such that K(Ad(𝒞))=K(𝒞)_ad and call it the adjoint subcategory. In 𝒞_n all objects are self-dual, and any monoidal product X_i⊗ X_i decomposes into a sum of even summands X_2j. This comes from the fact that 𝒞_n is universally /2-graded in the sense of <cit.> and the adjoint part is the trivial component of the universal grading on 𝒞. While we have seen in ex:fusiondata that the categories 𝒞_n^l are the only categorifications of Verlinde type fusion rings, it is not clear yet that the adjoint subcategories are the only possibilities for categorifications of the adjoint fusion rings. Recent work by Etingof and Ostrik <cit.> shows that this is indeed the case. As a shorthand notation we write K_i for the Grothendieck ring of the adjoint part of 𝒞_2i+2 and K'_i for the Grothendieck ring of the adjoint part of 𝒞_2i+1. Let 𝒞 be a pivotal fusion category categorifying the fusion ring K_l or K'_l' for l> 2 or l'≥ 1. Then there is a tensor equivalence 𝒞≃Rep(𝔰𝔬(3)_q) for q a primitive 4(l+1)-th root of unity. This is <cit.>. Here Rep(𝔰𝔬(3)_q) is the fusion category of tilting modules over the quantum enveloping algebra of 𝔰𝔬(3) specialized at the root of unity q. In our notation this is the category Ad(𝒞_n). There are two exceptions to this categorification result. For K_1 the Grothendieck ring is of the form K(Vec(/2)), which has two categorifications, for K_2 the ring has more categorifications, see <cit.>. However, none of these rings appear in any cases we consider in this work. Except for the dihedral group, for which we can use rem:elias. §.§ The asymptotic Hecke category for dihedral groups The two J-cells of size 1, c_0={1} and c_n={w_0}, have only one possible fusion categorification, as there is only one fusion category with one object, the finite dimensional vector spaces Vec. The asymptotic Hecke category therefore is this trivial category, we can label its simple object by B_1 or B_w_0 depending on which cell we focus on. For the middle cell one can do diagrammatic calculations to see that the associators coincide exactly with the ones from type A_k fusion categories. A small example can be seen in ex:non-strict. The close connection of the diagrammatic Hecke category for dihedral groups and the Temperley–Lieb category is due to Ben Elias. By <cit.> the (two-colored) Temperley–Lieb category embeds as the degree 0 morphisms into the category of Soergel bimodules of a dihedral group. By <cit.> we even have a degree-zero equivalence. This shows, that the morphism spaces in the asymptotic Hecke category are exactly described by the structure constants of recoupling theory, see ex:fusiondata. We can combine this information to a description of the asymptotic Hecke category for a dihedral group. Let n≥ 3 and consider the Coxeter group W of type I_2(n). Let c be the two-sided cell of a-value 1. The asymptotic Hecke category ℋ_W^c associated to c has the following fusion data. * The objects are labeled by elements of c: B_w for w∈ c. * The monoidal product is as in eq:mult_jring, where j_x denotes the equivalence class of B_x in the Grothendieck ring. * The associators are given by eq:6jtypea where for an object B_x we plug in the length of x minus one into the 6j-symbol. 
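To evaluate item (3) of the theorem in practice one needs the quantum integers and the 6j-symbols of the previous subsection. The following sketch is a direct numerical transcription of those formulas as displayed above (quantum integers, quantum factorials, k-admissibility, the θ-coefficient and the Kauffman–Lins 6j-symbol). It is our own illustration: the variable names are ours, the choice K=4, L=1 is only a test case, and the sign and normalization conventions are copied verbatim from the displayed formulas without being cross-checked against other sources, so it should be read as a sketch rather than a reference implementation.

```python
import math

K, L = 4, 1                      # work in a type A_K fusion category C_K^L (assumed test case)

def qint(m):
    # [m] = sin(m*L*pi/(K+1)) / sin(L*pi/(K+1)); equivalently the recursion
    # [m] = [2][m-1] - [m-2] with [0] = 0, [1] = 1 and [2] = 2*cos(L*pi/(K+1)).
    t = L * math.pi / (K + 1)
    return math.sin(m * t) / math.sin(t)

def qfact(m):
    out = 1.0
    for j in range(1, m + 1):
        out *= qint(j)
    return out

def admissible(a, b, c):
    # (a, b, c) is K-admissible iff m, n, p below are non-negative integers
    # and a + b + c <= 2K - 2.
    m2, n2, p2 = a + b - c, a + c - b, b + c - a
    return all(x >= 0 and x % 2 == 0 for x in (m2, n2, p2)) and a + b + c <= 2 * K - 2

def theta(a, b, c):
    m, n, p = (a + b - c) // 2, (a + c - b) // 2, (b + c - a) // 2
    return ((-1) ** (m + n + p) * qfact(m + n + p + 1) * qfact(m) * qfact(n) * qfact(p)
            / (qfact(m + n) * qfact(m + p) * qfact(n + p)))

def sixj(a, b, e, c, d, f):
    # Direct transcription of the displayed formula; inputs are assumed to
    # form admissible triples (a,b,f), (c,d,f), (a,d,e), (b,c,e).
    A = [(a + d + e) // 2, (b + c + e) // 2, (a + b + f) // 2, (c + d + f) // 2]
    B = [(b + d + e + f) // 2, (a + c + e + f) // 2, (a + b + c + d) // 2]
    I = 1.0
    for bj in B:
        for ai in A:
            I *= qfact(bj - ai)
    E = qfact(a) * qfact(b) * qfact(c) * qfact(d) * qfact(e) * qfact(f)
    total = sum((-1) ** s * qfact(s + 1)
                / (math.prod(qfact(s - ai) for ai in A)
                   * math.prod(qfact(bj - s) for bj in B))
                for s in range(max(A), min(B) + 1))
    return I * (-1) ** (e + 1) * qint(e + 1) / (E * theta(a, d, e) * theta(b, c, e)) * total

# Sample evaluations; the values depend on the chosen root of unity and on
# the sign conventions of the formula above.
print(admissible(1, 1, 2), theta(1, 1, 2))
print(sixj(1, 1, 0, 1, 1, 0), sixj(2, 2, 0, 2, 2, 0))
```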
[Non-trivial associators] We give one small example of a calculation showing non-trivial associators. Let n=3. For c={s,t,st,ts} the tensor ideal ℐ_<c consists of morphisms factoring over the longest word. If one follows the construction of <cit.> a morphism in ℐ<c needs to factor over the Jones-Wenzl projector corresponding to B_sts. We will denote this idempotent by e_sts∈Hom_ℋ_W(B_sB_tB_s,B_sB_tB_s). This therefore gives the Hom spaces Hom_ℋ_W^c(B_x,B_y) to be quotients by the ideal generated by e_sts. In the J-ring of c we have j_stj_ts = j_s, j_tsj_st=j_t, and j_s+j_t being the unit of J_c. Now, inside ℋ_W we have rk(_ℋ_W(B_stB_tsB_st,B_st))=2v^-2+9+…, therefore the projection is not unique. One can either use the unique projection (of the asymptotic category) for the first two terms and then the third, or firstly for the last two terms and then the first. However, we compute rk(_ℋ_W(B_stB_tsB_st,B_sts)) =v^-3+6v^-1+… and rk(_ℋ_W(B_sts,B_st)) = v+2v^3+…, hence, by the quotient construction, we get an up to scalar unique map of Hom_ℋ_W^c(B_stB_tsB_st,B_st). The scalar itself can be computed out of e_sts. This idempotent is of the form id_B_sB_tB_s=e_sts+f, where f is an idempotent corresponding to the summand B_s⊕⊂B_sB_sB_t, and one will therefore get the term -1 for the associator. This is the exact value one gets following ex:fusiondata. §.§ The center of type A_n-fusion categories We can now investigate the center of the asymptotic Hecke category of the dihedral group by considering the categories Ad(𝒞_n). First, we describe the Drinfeld center of 𝒞_n. The main idea is to find a braiding on the category as this gives an equivalence to the center. Let 𝒞 be a tensor category with invertible S-matrix. The center of 𝒞 has the form 𝒵(𝒞) ≃𝒞⊠𝒞^rev, where (-)^rev denotes the category 𝒞 with reverse braiding, i.e. c'_X,Y=c_Y,X^-1 for any braided object (X,c_X,-) in 𝒞. This result is originally by Mueger, <cit.>, see also <cit.>. They show that the functors 𝒞→𝒵(𝒞), X↦ (X,c_-,X) and 𝒞^rev→𝒵(𝒞), X↦ (X,c_X,-^-1) combine into an equivalence of braided tensor functors 𝒞⊠𝒞^rev→𝒵(𝒞). Note, that the center does not depend on the braiding chosen on 𝒞 as long as the associated S-matrix is invertible. Hence, we can freely choose the braiding for computing the modular data of the center. The categories 𝒞_n can be endowed with a braiding. All braidings were computed by <cit.>, see also <cit.> for an overview. They are classified by an integer l∈/4(n+1) with (l,n+1)=1. Remember that we defined 𝒞_n in terms of quantum numbers [k], where [2]=q+q^-1 for q a 2(n+1)-th root of unity. To define a braiding we see that we need even higher roots of unity. We choose the value l=1 and set z z_4(n+1) e^π i/2(n+1) to be a 4(n+1)-th root of unity, i.e. z_4(n+1)^2=q. Then the braiding on X_1⊗ X_1 has the form X_1⊗ X_1→ X_1⊗ X_1: [baseline=(base)] (base) at (0,2.8ex) ; (0,0) – (1,1); (0,1) – (1,0); z [baseline=(base)] (base) at (0,2.8ex) ; [smooth, tension=2] (0,0) to [bend right=30] (0,1);[smooth, tension=2] (1,0) to [bend left=30] (1,1); +z^-1[baseline=(base)] (base) at (0,2.8ex) ; [smooth, tension=2] (0,0) to [bend left=30] (1,0);[smooth, tension=2] (0,1) to [bend right=30] (1,1); Since X_1 generates the category 𝒞_n this equation defines all braidings on 𝒞_n uniquely. lemma:center_of_braided tells us directly that 𝒵(𝒞_n) has n^2 simple objects. The object X_i⊠ X_j maps to a simple object X_i⊗ X_j in 𝒵(𝒞_n) with a certain braiding coming from X_i∈𝒞_n and X_j∈𝒞_n^rev. The S-matrix of 𝒞_n has been computed in <cit.>. 
The entry corresponding to the tuple (X_i,X_j) is S_i,j=(-1)^i+j[(i+1)(j+1)] We denote the corresponding matrix by S_n=(S_i,j)_i,j. In the Deligne tensor product we get the S-matrix to be S_n⊠ S_n, i.e. the Kronecker product of the matrix with itself. We choose n=3. In 𝒞_n the braidings c_X_1,- on the object X_1 have then the form: c_X_1,X_0 :X_1→ X_1, 1 c_X_1,X_1 :X_0⊕ X_2 → X_0⊕ X_2, (z_16^5,z_16^1) c_X_1,X_2 :X_1→ X_1, z_16^4 The braiding of X_1 in 𝒞_3^rev is just the inverse of the morphisms before, i.e. c'_X_1,X_0 :X_1→ X_1,  1 c'_X_1,X_1 :X_0⊕ X_2 → X_0⊕ X_2, (z_16^11,z_16^15) c'_X_1,X_2 :X_1→ X_1, z_16^12 We can visualize the 3^2=9 simple objects of 𝒵(𝒞_3) by arranging them into a grid: X_0⊠ X_0 X_0⊠ X_1 X_0⊠ X_2 X_1⊠ X_0 X_1⊠ X_1 X_1⊠ X_2 X_2⊠ X_0 X_2⊠ X_1 X_2⊠ X_2 ⇝ X_0 X_1 X_2 X_1 X_0⊕ X_2 X_1 X_2 X_1 X_0 . The left side shows objects in 𝒞_3⊠𝒞_3^rev, the right side depicts the corresponding object in 𝒵(𝒞_3). We see that X_1 occurs with 4 different braidings in 𝒵(𝒞_3), while X_0 and X_2 only with two. Furthermore, there is a simple object X_0⊕ X_2 in 𝒵(𝒞_3), which is obviously not simple in 𝒞_3. Note also that all objects in 𝒵(𝒞_3) are self-dual as they are self-dual in 𝒞_3 and hence also in the Deligne tensor product. The S-matrix of 𝒞_3 is of the form S_3 = [ [1] -[2] [3]; -[2] [4] -[6]; [3] -[6] [9]; ] = [ 1 -√(2) 1; -√(2) 0 √(2); 1 √(2) 1; ] The dihedral fusion datum by Lusztig, <cit.> is of the following form: For p≥ 3 we consider the pairs (i,j) with 0<i<j<i+j<p or 0=i<j<p/2, as well as two special tuples (0,p/2) and (0,p/2)' if p is even. We then define a pairing via ⟨ (i,j), (k,l) ⟩ξ^il+jk+ξ^-il-jk-ξ^ik+jl-ξ^-ik-jl/p on non-special tuples. Here ξ is a p-th root of unity. This expression looks similar to an expression in quantum numbers, the connection has been described in <cit.>. We set n p-1, then the tuples (i,j) correspond to the object X_j-i-1⊠ X_j+i-1 in 𝒞_n⊠𝒞_n^rev. Both special elements will correspond to two different subobjects of X_n-1/2⊠ X_n-1/2, see ex:i25. For any tuple of pairs ((i,j),(k,l)) the S-matrix value of the corresponding entry of (X_j-i-1⊠ X_j+i-1,X_k-l-1⊠ X_k+l-1) is then (-1)^j+k-i-l-2[(j-i)(k-l)][(j+i)(k+l)]. The quantum part of this expression then gives q^(j-i)(k-l)-q^-(j-i)(k-l)/q-q^-1q^(j+i)(k+l)-q^-(j+i)(k+l)/q-q^-1 =(q^kj-ik-lj+il-q^ik-kj+lj-il)(q^jk+jl+ik+il-q^-jk-jl-ik-il)/(q-q^-1)^2 =q^2kj+2il-q^-2ik-2jl-q^2ik+2lj+q^-2il-2jk/(q-q^-1)^2, where q is a 2(n+1)-th root of unity, i.e. q^2=ξ. Indeed, this gives the result of the pairing by Lusztig modulo a term of the form (q-q^-1)^2/p, which is exactly the square root of the categorical dimension as in rem:normalize. §.§ The center of Ad(𝒞_n) Here we describe the Drinfeld center of Ad(𝒞_n) as calculated by <cit.>. We put it together with results of <cit.> to compute its S-matrix and see how the normalized S-matrix is the same matrix Lusztig computed under in <cit.> under an involution, i.e. a permutation on the columns. There is a case distinction depending on the parity of n. §.§.§ The case of n even It was noted in <cit.> that the braiding of 𝒞_n restricted to the adjoint part Ad(𝒞_n) is still modular, i.e. the corresponding S-matrix is still invertible. In this case we can use lemma:center_of_braided again. We have 𝒵(Ad(𝒞_2n)) ≃Ad(𝒞_2n) ⊠Ad(𝒞_2n^rev). For n=4 the even part of 𝒞_n is the Fibonacci category F. 
We have two simples objects (X_0,X_2) with monoidal product X_2⊗ X_2≃ X_0⊕ X_2 and trivial associators except for the map X_2^2→ X_2^2, [ [1]/[3] -[2]^2/[4]; -[4]/[2]^2[3] [6]/[3][4] ] = [ φ^-1 -1-φ; -φ^-3 -φ^-1 ], where φ = 1+√(5)/2 and [n] are the quantum numbers with [2]=φ. Furthermore, the S-matrix is the restriction of the S-matrix of 𝒞_4, S_4, to the odd rows and columns: S_F=[ [1] [3]; [3] [9]; ]=[ 1 φ; φ -1 ]. Note, that [9]=-[1]=-1 and [2]=[3]=φ. This is invertible, as excepted by lemma:center_a_2n. The center 𝒵(Ad(𝒞_4)) = 𝒵(F) can be visualized as the black objects in the matrix X_i⊗ X_j, see eq:center_a_n X_0 · X_2 · · X_0⊕ X_2 · X_2 X_2 · X_0⊕ X_2 · · X_2 · X_0 This gives the S-matrix of 𝒵(ℱ) to be S_F⊠ S_F = φ[ φ^-1 1 1 φ; 1 -φ^-1 φ - 1; 1 φ -φ^-1 -1; φ -1 -1 φ^-1; ]. Here the ordering of objects is following the columns of eq:center_fibonnaci, i.e. X_0,X_2,X_2, and then X_0⊕ X_2. This matrix corresponds to Lusztig's result in <cit.> under reordering and normalizing by the square root of the dimension of 𝒵(𝒞) and applying an involution as seen in <cit.>. To be more precise we have dim(𝒵(𝒞))=(𝒞)^2, hence the normalization divides by the dimension of 𝒞. This is (𝒞)=(X_0)^2+(X_2)^2=1^2+φ^2=5+√(5)/2=√(5)φ. Under the ordering (X_0,X_0⊕ X_2,X_2,X_2) we then get 1/√(5)[ φ^-1 φ 1 1; φ φ^-1 -1 - 1; 1 -1 -φ^-1 φ; 1 -1 φ -φ^-1; ]. The final twist comes from the involution (-)^♭, which sends (i,j)↦ (i,p-j) if i≥ 0 and is trivial otherwise, see <cit.>. This interchanges both copies of X_2 (the ones coming from the pairs (1,2) and (1,3)) and leaves the other two elements invariant. Under the involution we therefore get exactly the matrix of <cit.>. This calculation works generally for any n=2m even, see the calculations in <cit.>. We have n^2 objects in 𝒵(𝒞_n) and hence m^2 in 𝒵(Ad(𝒞_n)). The values of the normalized S-matrix coincide with the calculations done in <cit.>. As one example we can look at the entry corresponding to the unit pair (X_0,X_0). In the S-matrix it is 1, while it is 1/(𝒞_n) in the normalized S-matrix. The value of the pairing ⟨(0,1),(0,1)⟩ is -(q-q^-1)^2/p, which are equal values. §.§.§ The case of odd n Now we consider the category Ad(𝒞_2n+1). Here the restriction of the S-matrix is not invertible anymore, for example in eq:S3 the odd rows and columns give [ 1 1; 1 1 ]. Therefore, one cannot use lemma:center_of_braided directly. There is an alternative way described in <cit.>. Let 𝒞 be a braided fusion category with braiding c. For any fusion subcategory 𝒟⊆𝒞 we write 𝒟' for the centralizer, i.e. the full fusion subcategory consisting of all objects (X,c)∈𝒞 such that c_X,Y∘ c_Y,X=𝕀_X⊗ Y for all (Y,c)∈𝒟. In this scenario 𝒞 is a 𝒟-bimodule category. We define the relative center 𝒵_𝒟(𝒞) as in <cit.>. If 𝒞 is a G-graded fusion category the trivial component 𝒟𝒞_0⊆𝒞 is a fusion subcategory. By <cit.> we have an isomorphism 𝒵_𝒟(𝒞)^G ≃𝒵(𝒞). With this we can recover 𝒵(𝒟) out of 𝒵(𝒞). The simples objects in 𝒵(𝒞) restricting to direct sums of the monoidal unit in 𝒞 under the forgetful functor 𝒵(𝒞)→𝒞 form a subcategory ℰ≃Rep(G)⊆𝒵(𝒞). We get an isomorphism (ℰ')_G ≃𝒵(𝒟), where (-)_G stands for the de-equivariantization. This is <cit.> using <cit.> We continue with ex:center_c_3. Here the categories 𝒞_n are G/2-graded, and 𝒟Ad(𝒞_n) is the even or adjoint part of the category. We have seen that the subcategory ℰ≃Rep(/2) is generated by two copies of X_0. The first, the monoidal unit, has trivial braidings. 
The second copies braidings are trivial on X_0 and X_2 but we have c_X_0,X_1: X_1→ X_1, (-1), for the braiding on X_1. We write (X_0,c̃) for this copy to distinguish it from the unit. From this we can compute the centralizer ℰ'. Note, that since the braiding of (X_0,c̃) on X_1 is non-trivial no copy of X_1 can lie in ℰ' as their braidings on X_0 are trivial. All other objects however lie in the centralizer, i.e. all black objects in X_0 X_1 X_2 X_1 X_0⊕ X_2 X_1 X_2 X_1 X_0 . Under the de-equivariantization both copies of X_0 and X_2 in the corners will be isomorphic, the object X_0⊕ X_2 however decomposes into two simple objects X_0 and X_2 not isomorphic to the others. In total we then get 4 simple objects in the center 𝒵(Ad(𝒞_n)). The restriction of the S-matrix of 𝒵(𝒞_n) to the objects X_0,X_2 and X_0⊕ X_2 has the form [ 1 1 2; 1 1 -2; 2 -2 0; ], the S-matrix of 𝒵(Ad(𝒞_n)) is of the form [ 1 1 1 1; 1 1 -1 -1; 1 -1 1 -1; 1 -1 -1 1; ] and we see that the sum of the third and fourth rows and columns is the same as before. The calculations regarding the centralizer of the Rep(/2) subcategory have been done by Lusztig in <cit.> and, in more detail including the S-matrix in <cit.>. We have n^2 objects in the center of 𝒞_n. For n odd, i.e. n=2m+1 we have 2m^2+2m+1 objects in ℰ. Under the de-equivariantization we get isomorphisms from the objects X_i⊠ X_j to X_m-i⊠ X_m-j. The object X_m⊠ X_m decomposes into a direct sum of two simples in 𝒵(Ad(𝒞_n)), hence we are left with m^2+m+2 simples objects, as it was conjectured. lusztig_conjecture holds for type I_2(n). This is the result of the observations in this sections, specifically lemma:center_a_2n gave a description of the center of the asymptotic Hecke category associated to the two-sided cell in type I_2(2n), and the last rem_lacabanne showed that the S-matrix of the center of Ad(𝒞_2n+1)≃ℋ^h is indeed the Fourier matrix by Lusztig. § THE TYPES H_3 AND H_4 We give an overview of the possible S-matrices occurring for Drinfeld centers of asymptotic Hecke algebras corresponding to J-cells in non-crystallo­graphic finite Coxeter groups. The missing two types H_3 and H_4 are discussed and complemented by the works of the previous sections. §.§ Type H_3 and H_4 All cells, their a-values and asymptotic Hecke algebras of the Coxeter groups H_3 and H_4 have already been computed. See for example <cit.> for data on H_4. It turns out that the diagonal H-cells occurring are nearly always rather small, having only one or two elements. In these cases we have mostly only one possible categorification, hence the corresponding S-matrices are easy to list. In a couple of cases the associator is not known and more calculations are needed. However, combinatorial results by Broué and Malle, see <cit.> tell us which categorification should be the right one assuming lusztig_conjecture is true. These observations have been made in <cit.> by Mackaay, Mazorchuk, Miemitz, Tubbenhauer and Zhang. The asymptotic Hecke category associated to an H-cell is denoted there by 𝒜_ℋ and called asymptotic bicategory. The construction can be found in <cit.>. We reuse their results on asymptotic Hecke categories in types H_3 and H_4 and augment their observations by possible S-matrices. Only in type H_4 there is one cell with a considerably bigger H-cell. The J-cell of a-value 6 contains diagonal H-cells of sizes 14, 18 and 24. 
A description of the asymptotic Hecke category associated to it is unknown; there is, however, a combinatorial result <cit.> about the exact S-matrix of its center, which contains 74 simple objects, assuming lusztig_conjecture is true in this case. Note that this means that the centers of the three different categorifications ℋ_W^h of sizes 14, 18 and 24 all need to be equivalent to a category with 74 simples.

In type H_3 we have 7 J-cells with the following data:

c label (artificial)      |  1  |  2  |  3  |  4  |  5  |  6  |  7
|c|                       |  1  | 18  | 25  | 32  | 25  | 18  |  1
a-value                   |  0  |  1  |  2  |  3  |  5  |  6  | 15
size of diagonal H-cell   |  1  |  2  |  1  |  2  |  1  |  2  |  1
asymptotic Hecke category | (A) | (B) | (A) | (C) | (A) | (D) | (A)

In type H_4 there are 13 J-cells with the following data:

c label |  1  |  2  |  3  |  4  |  5  |  6   |    7     |  8   |  9  | 10  | 11  | 12  | 13
|c|     |  1  | 32  | 162 | 512 | 625 | 1296 |   9144   | 1296 | 625 | 512 | 162 | 32  |  1
a-value |  0  |  1  |  2  |  3  |  4  |  5   |    6     |  15  | 16  | 18  | 22  | 31  | 60
|h|     |  1  |  2  |  2  |  2  |  1  |  1   | 14,18,24 |  1   |  1  |  2  |  2  |  2  |  1
ℋ_W^h   | (A) | (B) | (D) | (C) | (A) | (A)  |   (E)    | (A)  | (A) | (C) | (D) | (D) | (A)

with the following cases for the asymptotic Hecke category:
* There is only one element in h, hence any categorification has only one simple object; we therefore have ℋ_W^h≃Vec.
* The fusion ring structure is that of the Fibonacci category, in the same way as in example_fibonacci. We have a categorification of ℋ_W^h through Ad(𝒞_4).
* The fusion ring structure is that of K(Vec(ℤ/2)). This has two different categorifications, so ℋ_W^h∈{Vec(ℤ/2),Vec^ω(ℤ/2)}, with ω the non-trivial associator as in type E_8.
* Here we have the same fusion ring as in case (B), however it is not clear which root of unity appears in the categorification. Either we have [2]=(1+√(5))/2 as in case (B) or [2]=(1-√(5))/2.
* This is the only case where no categories with the respective Grothendieck ring are known. If lusztig_conjecture holds we will have that for any such category ℋ_W^h the center has 74 simple objects.

§.§ The S-matrices of centers of asymptotic Hecke categories of exotic cells

We complete the overview of S-matrices of the centers of asymptotic Hecke categories associated to two-sided Kazhdan–Lusztig cells which was started in cor_smatrix_weyl and extended in cor_e78, using the results for type H above and for the dihedral group as in thm:dihedral. Let c be a two-sided cell in a Coxeter group of type I_2(n), H_3, H_4. The S-matrix of the center of the asymptotic Hecke category is one of the following:
* If c contains a diagonal H-cell of size 1, such as the cells {1}, {w_0} in type I_2(n) and the ones of case itema in type H, the asymptotic Hecke category is Vec and the S-matrix is S= [ 1 ].
* For the cases of itemb in types H_3 and H_4 the asymptotic Hecke category is ℋ_W^h≃Ad(𝒞_4). The normalized S-matrix of its center is S_B=S_F=1/√(5)[ φ^-1 φ 1 1; φ φ^-1 -1 -1; 1 -1 -φ^-1 φ; 1 -1 φ -φ^-1 ], where φ=(1+√(5))/2, see example_fibonacci.
* If c is the middle J-cell in type W=I_2(n+1) for n≥ 2, we have ℋ_W^c≃Ad(𝒞_n). For n even the S-matrix is S(𝒵(Ad(𝒞_n))) = ([(2i-1)(2j-1)])_1≤ i,j ≤ n/2^⊗ 2, for [2]=q+q^-1 and q a 2(n+1)-th root of unity. The normalization factor of the S-matrix is (q-q^-1)^2/(n+1). For n=4 this is exactly the result of the previous item. For n odd we have seen that in 𝒵(ℋ_I_2(n+1)^h)≃𝒵(Ad(𝒞_n)) one simple object of Ad(𝒞_n)⊠Ad(𝒞_n) splits in the center, and the S-matrix therefore includes the matrix (<ref>) as well as two new rows and columns, whose entries can be computed by the pairing of Lusztig, see (<ref>) in ex:i25 and (<ref>). For I_2(4) this gives for example S(𝒵(Vec(ℤ/2))) and for I_2(6) we get S(𝒵(Vec(S_3))), see cor_smatrix_weyl.
* In the case itemd in type H we had two possible categorifications.
The S-matrix in the second option is nearly S_B; we only need to replace [2]=φ=(1+√(5))/2 by [2]=(1-√(5))/2. We call this modified S-matrix S_F'. The Fourier matrix in all of these cases, as seen in <cit.>, is however S_B. If lusztig_conjecture is true we therefore expect to never see the second option.
* In the case itemc in type H we again had two possible categorifications, namely the category of ℤ/2-graded vector spaces, either with trivial or non-trivial twist. The possible normalized S-matrices are S_C∈{1/2[ 1 1 1 1; 1 1 -1 -1; 1 -1 1 -1; 1 -1 -1 1 ], 1/2[ 1 1 1 1; 1 1 -1 -1; 1 -1 -1 1; 1 -1 1 -1 ]}. However, if lusztig_conjecture holds, the Fourier matrix as computed by <cit.> is the second option, i.e. the same as in the exceptional cases of type E_7 and E_8, see cor_e78.
* And finally we expect the S-matrix for the cell of a-value 6 in type H_4 to be the Fourier matrix computed by <cit.>.

§ EXAMPLES IN INFINITE COXETER GROUPS

So far we have only considered finite Coxeter groups. In these cases it is clear that all two-sided cells themselves are also finite. However, one might also investigate finite two-sided cells lying in infinite Coxeter groups. There are conjectured results on the structure of cells in infinite Coxeter groups, see <cit.>; however, as far as the authors are aware, there are no classification results on finite two-sided or H-cells. We will focus this section on two known classifications and extend the description of S-matrices of asymptotic Hecke algebras to all finite two-sided cells of a-value lower than or equal to 2. The cell of a-value 0 is always finite as it only contains the neutral element. The asymptotic Hecke category in this case is the category of finite-dimensional vector spaces Vec, and the S-matrix of its center is (1).

§.§ The case of a(1)-finite Coxeter groups

Let W always denote a Coxeter group with generating set S. We write W_i ≔ {x∈ W | a(x)=i} for the subsets of elements of a given a-value. We say that W is a(i)-finite if W_i is finite. Hart gave a characterization of a(1)-finite Coxeter groups in <cit.>. The set W_1 always has an easy description: In any irreducible Coxeter group W the unique two-sided cell of a-value 1, W_1, consists of the elements of W which have a unique reduced expression. This can be seen for example in <cit.>. Furthermore, the left and right cells inside it are partitioned by the right and left descending set of the element, i.e. x∼_L y for a(x)=a(y)=1 if and only if the unique reduced expressions of both elements end in the same reflection. Let W be an irreducible Coxeter group with generating set S. The set W_1 of elements of a-value 1 is finite if and only if all of the following conditions hold:
* The set S is finite.
* The Dynkin diagram of W is a tree.
* There is no relation m_s,t=∞, and there is at most one relation m_s,t>3 for s,t∈ S.
Remember that the set W_1 is characterized as the set of words of W which have a unique reduced expression, see rem_cella1. The idea is the following: In a unique expression (s_1,s_2,…,s_n)∈ W there cannot be an i such that s_i and s_i+1 commute, hence we can interpret any word as a path inside the Dynkin diagram of W. The question is therefore to decide when a path represents a reduced expression and when there are only finitely many of them. A path represents a reduced expression if and only if there is no subsequence (s,t,s,…,s) of length m_s,t.
Hence, if S is infinite we can construct infinitely many reduced expressions, similarly if there is a cycle in the Dynkin diagram or if we have m_s,t=∞ for some s,t. Now assume that there are two tuples (s,t) and (u,v) with m_s,t,m_u,v>3 and let p be a path connection both tuples. Without loss of generality, we have p=(t=r_0,r_1,…,r_n=u). Let p^-1 denote the reverse path. Then the composition (p,v,p^-1,s) represents a reduced expression and any power of it does too, hence we also have W_1 to be infinite. If now all our assumptions are satisfied we show that there is a finite number of paths giving a unique reduced expression. Let m_s,t be the biggest relation occurring. Any reduced expression of a path starting in an r∈ S of length more than S includes one element u∈ S at least twice. As the Dynkin diagram contains no circles we therefore can find a subsequence of the form (…,u,v,u,…). This implies that (u,v) is the edge (s,t) with m_s,t>3. Any path corresponding to a reduced expression can now repeat u,v a maximum of m_s,t-1 times. Once the path leaves the edge it cannot come back, hence the length of a reduced expression is bounded and the size of W_1 is therefore bounded as well. Let W be a Coxeter group with generating set M and let K,L⊂ M be disjoint subsets of M. We denote the Coxeter groups generated by K and L by U and V, then U× V ⊂ W is a subgroup of W. For two-sided cells c_1⊂ U and c_2⊂ V of a-value i and j the Cartesian product c c_1× c_2⊂ W is a two-sided cell of a-value i+j inside W. The asymptotic Hecke algebra J_c is also isomorphic to J_c_1× J_c_2. This follows quickly from the observation that for x∈ c_1 and y∈ c_2 the Kazhdan–Lusztig basis elements commute, i.e. we have b_xb_y=b_yb_x and therefore b_(x,y)=b_xb_y. The cell and a-value computations now work independently in both summands. The conclusion of lem_a(1)-finite still holds when W is not to be assumed irreducible. The assumptions (2) and (3) then need to hold for any connected component of the Dynkin diagram. We have seen in lem:addition_a that the a-value of a cell c× d for c and d lying in different Coxeter groups is the sum of their a-values. Let now S=∐ S_i be a disjoint union where each S_i represents a connected component of the Dynkin diagram. Then any cell c⊂ W(S) of a-value 1 has the form {1}×{1}×…× c_i ×…×{1}, for c_i being a cell of a-value 1 lying in W(S_i). We can now apply lem_a(1)-finite. Let W be an a(1)-finite irreducible Coxeter group and let c⊂ W be a two-sided cell. Let m be the value of the biggest relation occurring in the Dynkin diagram. For any tuple (r,s) of reflections in W there is a unique H-cell h_r,s where all words start in r and end in s. The size is ⌊m/2⌋ if the shortest path connecting r and s includes the edge m and ⌊m-1/2⌋ if not. This follows from the proof of lem_a(1)-finite by counting the number of paths corresponding to reduced expression. We need the characterization of left and right cells inside W_1 by starting and ending letter as seen in rem_cella1. An enumeration of W_1 can also be found in <cit.>. Let W be an a(1)-finite Coxeter group and let c be a two-sided cell of W. Then one can choose an H-cell h⊂ c such that the asymptotic Hecke category ℋ_W^h is equivalent to Ad(𝒞_n) for some n, i.e. the center and the S-matrix are the same as in the dihedral case as seen in thm:overview. Following cor:classification_a1finite we can assume that W is irreducible. Let s,t be generators of W such that m_s,t is maximal (i.e. 
we take the unique tuple (s,t) such that m_s,t is greater than 3 if it exists). We now choose the H-cell starting and ending in s, h ≔ h_s,s. This cell is then the same as the H-cell of the subgroup generated only by s and t, a dihedral group of order 2m_s,t. All computations of ℋ_W^h therefore reduce to the finite dihedral case.

One such Coxeter group has appeared in <cit.>. The Coxeter group is of type W_237 with generators ⟨ r,s,t | r^2=s^2=t^2=(rs)^3=(st)^7=(rt)^2=1⟩. Following cor:enumeration_a(1)-finite we can enumerate all elements of a-value 1 by looking for paths corresponding to reduced expressions, and we order these elements by starting and ending letter, i.e. we partition them into left and right cells. On the diagonal H-cell coming from the dihedral subgroup of type I_2(7) the multiplication on the asymptotic Hecke algebra can be read off directly. We have for example j_sts^2=j_s+j_sts+j_ststs. Similarly, one can work out the complete multifusion ring structure and get for example j_sr j_rststsr=j_stststr. The center of the asymptotic Hecke category has 14 simple objects.

§.§ The case of a(2)-finite Coxeter groups

Recent results by Green and Xu classified all irreducible Coxeter groups which are a(2)-finite. Coxeter groups whose Dynkin diagram contains a cycle have either none or infinitely many elements of a-value 2. For all other cases they further always described one H-cell lying in W_2. We list their results and show that the S-matrix of the asymptotic Hecke category is the same as in the dihedral case of thm:overview, by choosing an appropriate H-cell in which the asymptotic Hecke algebra is isomorphic to the Grothendieck ring of Ad(𝒞_n).

<cit.> and <cit.> An irreducible Coxeter group W with elements of a-value 2 is a(2)-finite if and only if it is of one of the following types: A_n, B_n, C̃_n, E_q,r, F_n, H_n, I_n, where C̃_n denotes the extended Coxeter diagram of type C on vertices 1,…,n, for n≥ 3; E_q,r denotes the simply laced tree consisting of a chain of vertices -q,…,-1,0,1,…,r together with one additional vertex v joined to the vertex 0, for r≥ q≥ 1; F_n denotes the Coxeter diagram of type F extended to a chain of vertices 1,…,n, for n≥ 4; and H_n denotes the Coxeter diagram of type H extended to a chain of vertices 1,…,n, for n≥ 3. In the case E_q,r where r=q=1 (i.e. D_4) the set W_2 consists of three two-sided cells; if r>q=1 (i.e. type D_r+3) we have two cells in E_q,r. In all other cases W_2 itself is a two-sided cell. One representative of an H-cell is given by the following:
* Type A_n: h={13}
* Type B_3: h={13}
* Type B_n for n>3: h={24,2124}
* Type C̃_n-1 where n≥ 5: h={24,2124,2z,212z}, where z=45… (n-1)n(n-1)… 54
* Type E_q,r where r≥ q≥ 2: h={1v}
* Type F_4: h={24}
* Type F_n, where n>4: h={24,243524}
* Type H_3: h={13}
* Type H_n, where n>3: h={24,2124}

Let W be an irreducible a(2)-finite and infinite Coxeter group. The center of the asymptotic Hecke category associated to c=W_2, the two-sided cell of a-value 2, is equivalent to one of the following cases: Vec, 𝒵(F), 𝒵(F'), 𝒵(F)⊠𝒵(F), for F the Fibonacci category Rep(𝔰𝔬(3)_3) as in example_fibonacci and F' the Fibonacci category with the second choice of associator as seen in thm:overview. The possible S-matrices are: (1), S_F⊠ S_F, S_F'⊠ S_F', (S_F⊠ S_F)⊠ (S_F⊠ S_F). Of the classification in prop:a2-finite we are only concerned with the infinite cases C̃_n, E_q,r, F_m, H_l for q≥ 2 and r+q≥ 7, m>4, l>5. If the H-cell given has size 1 then the only possible categorification of the asymptotic Hecke algebra is Vec. Therefore, we only need to check the three remaining cases.
In all of them we find that the H-cell lies in a finite parabolic subgroup in which it is also an H-cell of a-value 2. In this parabolic subgroup h can be written as h_1× h_2, where both are of a-value 1 inside the respective Coxeter group, see lem:addition_a. One can therefore deduce the structure of the asymptotic Hecke algebra from finite cases. * Case F_m for m>4: The H-cell h={24,243524} lies in the parabolic subgroup B_4 where we identify the generators i of B_4 by i+1 inside F_4. The asymptotic Hecke algebra in this case is the Fibonacci ring, where j_24 is the identity and j_243524^2=j_24+j_243524. Therefore, rem:ostriksresult holds and we categorify over a type A_n category as in the dihedral case. * Case H_n for n>3: The H-cell h={24,2124} lies in the finite parabolic subgroup I_2(5)× A_1, where we identify the generators 1 and 2 with those of I_2(5) and 4 with the one of A_1. This case therefore also reduces to the observations of thm:overview. * Case C̃_n: The H-cell h={24,2124,2z,212z} lies in the parabolic subgroup I_2(4)× B_n-3. It is the product of their unique cells of a-value 1, namely {2,212} in I_2(4) and {4,z} in B_4. Both asymptotic Hecke categories associated to them are again the Fibonacci category. The asymptotic Hecke category associated to h is therefore the product K(F)× K(F). It could be that there are different categories categorifying this ring, however by a classification result on small rank fusion categories <cit.> only the Deligne tensor product of the Fibonacci category with itself lies over the given fusion ring. The asymptotic Hecke category ℋ_W^h is therefore equivalent to F⊠ F, the center is 𝒵(F⊠ F)≃𝒵(F)⊠𝒵(F) and the S-matrix is given by the Kronecker product as stated.
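To close this section, here is a small computational sketch (ours, not taken from the cited classification papers) of the path enumeration used in the a(1)-finite discussion above: it lists the elements with a unique reduced expression in the group W_237 considered earlier, by generating the walks on its Coxeter diagram that contain no two commuting consecutive letters and no alternating factor of length m_s,t, and it groups them into H-cells by first and last letter.

```python
from collections import Counter

# Coxeter matrix of the group W_237 from the a(1)-finite subsection:
# m(r,s) = 3, m(s,t) = 7, m(r,t) = 2 (so r and t commute).
M = {frozenset('rs'): 3, frozenset('st'): 7, frozenset('rt'): 2}
GENS = 'rst'

def m(a, b):
    return M[frozenset((a, b))]

def unique_reduced(word):
    # A word is a unique reduced expression iff consecutive letters are
    # distinct and non-commuting, and no factor is an alternating word
    # a b a b ... of length m(a, b), which would admit a braid move.
    for i in range(len(word) - 1):
        a, b = word[i], word[i + 1]
        if a == b or m(a, b) == 2:
            return False
        k = m(a, b)
        alt = ''.join(a if j % 2 == 0 else b for j in range(k))
        if word[i:i + k] == alt:
            return False
    return True

def enumerate_W1():
    # Grow words letter by letter; validity is inherited by prefixes,
    # so a breadth-first search finds every element of a-value 1.
    found, frontier = [], list(GENS)
    while frontier:
        found.extend(frontier)
        frontier = [w + g for w in frontier for g in GENS
                    if g != w[-1] and unique_reduced(w + g)]
    return found

W1 = enumerate_W1()
h_cells = Counter((w[0], w[-1]) for w in W1)   # grouped by (first, last) letter
print(len(W1), dict(h_cells))
print(sorted(w for w in W1 if (w[0], w[-1]) == ('s', 's')))  # the diagonal H-cell h_{s,s}
```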
http://arxiv.org/abs/2307.07469v1
20230714165125
Interactive Spatiotemporal Token Attention Network for Skeleton-based General Interactive Action Recognition
[ "Yuhang Wen", "Zixuan Tang", "Yunsheng Pang", "Beichen Ding", "Mengyuan Liu" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.RO" ]
Recognizing interactive actions plays an important role in human-robot interaction and collaboration. Previous methods use late fusion and co-attention mechanisms to capture interactive relations, but they have limited learning capability or are inefficient to adapt to more interacting entities. Under the assumption that priors of each entity are already known, they also lack evaluations in a more general setting addressing the diversity of subjects. To address these problems, we propose an Interactive Spatiotemporal Token Attention Network (ISTA-Net), which simultaneously models spatial, temporal, and interactive relations. Specifically, our network contains a tokenizer to partition Interactive Spatiotemporal Tokens (ISTs), which provide a unified way to represent motions of multiple diverse entities. By extending the entity dimension, ISTs provide better interactive representations. To jointly learn along the three dimensions of ISTs, multi-head self-attention blocks integrated with 3D convolutions are designed to capture inter-token correlations. When modeling correlations, a strict entity ordering is usually irrelevant for recognizing interactive actions. To this end, Entity Rearrangement is proposed to eliminate the orderliness in ISTs for interchangeable entities. Extensive experiments on four datasets verify the effectiveness of ISTA-Net by outperforming state-of-the-art methods. Our code is publicly available at <https://github.com/Necolizer/ISTA-Net>.

§ INTRODUCTION

Interactive action recognition is a crucial yet challenging task in computer vision and physical human-robot interaction<cit.>, with a wide range of applications like assistive household robots<cit.> and interactive mechanical arms<cit.>. These smart assistants should understand interactive motion patterns and the intents behind actions to ensure safe and reliable human-robot collaboration<cit.>. An interactive action is a purposeful behavior that involves the interdependent physical dynamics of multiple entities. The indivisibility of interdependent entities distinguishes interactive actions from individual actions and group activities. Individual actions (Fig. <ref> (a)) are concerned with the motions of a single subject. Group activities (Fig. <ref> (b)) are events concluded or abstracted from the common goals of actions, which contain considerable irrelevant and noisy individual actions. In contrast, each subject in an interactive action is indispensable to convey the full semantic meaning. Types of interactive actions include person-to-person, hand-to-hand, and hand-to-object interactions. Diverse interacting entities have distinct physical structures and interaction patterns, leading to complexity and variability when modeling interactions. Studies on modeling interactive actions have emerged in recent years<cit.>, but they study only one specific type of interaction, as depicted in Fig. <ref> (c). They also assume that prior knowledge of the physical connections within each interacting subject is already known and remains fixed. Therefore, these methods lack evaluations in a general setting addressing the diversity of interacting entities.
In this paper, we focus on a general interactive action recognition task, which is a generalization of the subject-type-specific ones, as shown in Fig. <ref> (d). Moreover, previous designs have limitations when capturing interactive relations. Late fusion offers a simplistic approach for modeling interactive relations, but it lacks the capacity to handle complex interactions. On the other hand, expanding the co-attention architecture to accommodate more than two interacting entities is inefficient, because the number of pair-wise co-attention score calculations grows as the number of entities increases. Therefore, an important question arises: how can the spatial, temporal, and interactive relations of diverse interacting subjects be learned jointly? To answer this question, we simultaneously model entity, temporal and spatial relations between interacting entities with an Interactive Spatiotemporal Token Attention Network (ISTA-Net), whose core component is the Interactive Spatiotemporal Tokenization. 3D Interactive Spatiotemporal Tokens (ISTs) are generated by this tokenization, which is a unified way to represent the motions of multiple diverse entities. To learn inter-token correlations, we integrate 3D convolutions with self-attention and design Token Self-Attention (TSA) Blocks. Moreover, the ordering of unordered entities in ISTs is unnecessary for modeling correlations. We propose Entity Rearrangement (ER) to address this issue. The main contributions of this paper are as follows: * We propose an Interactive Spatiotemporal Token Attention Network to solve the general interactive action recognition task, which does not require prior knowledge of the subject's physical structure. * Specifically, we present Interactive Spatiotemporal Tokens that fuse three-dimensional interactive spatiotemporal features, effectively representing spatiotemporal interactions for diverse entities. We present Token Self-Attention Blocks for better capturing the correlations of different interactive features. Moreover, Entity Rearrangement is proposed to ensure inherent permutation invariance for unordered entities in ISTs. * Extensive experiments on the NTU RGB+D 120, SBU-Kinect-Interaction, H2O and Assembly101 datasets consistently verify the effectiveness of our method, which outperforms most interactive action recognition methods. Our code is publicly available. § RELATED WORK §.§ Action Recognition Most skeleton-based action recognition methods focus on developing effective architectures to recognize individual actions. Early approaches<cit.> adopted RNNs or LSTMs to model the long-term context of skeleton sequences. Then many models based on Graph Convolutional Networks (GCNs) were proposed<cit.>. To facilitate modeling channel-wise topologies, CTR-GCN <cit.> learns a shared topology for all channels and refines it for each channel. InfoGCN<cit.> adopts a novel learning objective to learn compact latent representations. Recent works have explored the potential of introducing the self-attention mechanism into skeleton spatiotemporal modeling<cit.>. For instance, STSA-Net<cit.> adopted a spatiotemporal segment encoding strategy to fuse joint relations between frames. §.§ Interactive Action Recognition Recently proposed interactive action recognition models <cit.> capture interactions with specially designed modules based on subject priors. TA-GCN <cit.> models the hand-to-object relationship with a topology-aware graph convolutional network, in which prior graph dependencies of the hands are predefined.
For person-to-person mutual actions, LSTM-IRN <cit.> adopts relational reasoning over the different relationships between the human joints during interactions. IGFormer <cit.> is the first to adopt a Transformer-based architecture and leverages prior knowledge of human body structure to design a co-attention mechanism for interactions. Different from the above methods, our method utilizes Interactive Spatiotemporal Tokens as early fusions for modeling interactive spatiotemporal features, which also allows ISTA-Net to handle various interactions, such as person-to-person, hand-to-hand, and hand-to-object interactions, with no need to manually predefine adjacency based on subject-type-specific prior knowledge. § ISTA-NET The architecture of our proposed Interactive Spatiotemporal Token Attention Network is presented in Fig. <ref>. The input is an interactive action, which can be constituted by different types of entities. Firstly, ISTA-Net performs Entity Rearrangement during training to maintain the equivalence of unordered subjects. Subsequently, the skeleton tensor is tokenized by a 3D sliding window. Then the Interactive Spatiotemporal Tokens are fed to L Token Self-Attention Blocks to learn token-level interdependency. The prediction is finally made through Global Average Pooling (GAP) over the ISTs followed by a fully connected (FC) layer. §.§ Interactive Spatiotemporal Tokenization for Interactive Skeleton Sequences An important aspect of ISTA-Net is the design of attention tokens that represent interactive spatiotemporal local features for interactive skeleton sequences. We propose a general solution to represent the motion of multiple skeletons, including diverse subjects, without the assumption that the priors of each interacting entity are already known. Suppose that there are E interactive entities performing an interaction over a period of time T, and each entity contains J joints. Depending on whether 2D or 3D skeletons are estimated, the coordinate dimension C can be 2 or 3. Thereby the input skeleton sequence is defined as X_input∈ℝ^C× T× J × E. In comparison to individual actions, interactive actions have an additional dimension E representing the interacting entities, which must be taken into consideration when tokenizing the skeletal data. Our solution is to use non-overlapping 3D windows to obtain Interactive Spatiotemporal Tokens. This step is performed by the Interactive Spatiotemporal Tokenization (IST) Block. Given a window W of size T_w× J_w× E_w, it slides along the temporal, spatial and interactive dimensions, partitioning the input data in a non-overlapping manner. Therefore, the input of size C× T× J × E is divided into U = ⌈ T/T_w⌉×⌈ J/J_w⌉×⌈ E/E_w⌉ patches of size C× T_w× J_w× E_w in total, which is illustrated as follows: X_w = IST(X_input, W), where W ∈ℝ^T_w× J_w× E_w and X_w∈ℝ^C× T_w× (J_w× E_w)× U. The tokens X_w can be viewed in ℝ^(C× T_w× J_w× E_w)× U, which corresponds more directly to the standard Transformer input format. However, in this case, we retain the coordinate dimension C and temporal dimension T_w for downsampling and temporal aggregation in later stages. In some cases, such as in the T channel, the input size T may not be evenly divisible by the window size T_w. In such cases, parts of the original tensor are replicated and padded along the T dimension to create a new tensor of size T' in the time channel, where T_w divides T' exactly.
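To make the tokenization step concrete, the following is a minimal Python/NumPy sketch of the non-overlapping 3D window partition described above; the replicate-padding scheme and the function and variable names are our own illustrative choices and are not taken from the released ISTA-Net implementation, which operates on PyTorch tensors.

    import numpy as np

    def ist_tokenize(x, window):
        # Partition a skeleton tensor of shape (C, T, J, E) into non-overlapping
        # 3D windows of size (T_w, J_w, E_w), yielding tokens of shape
        # (C, T_w, J_w * E_w, U) with U = ceil(T/T_w) * ceil(J/J_w) * ceil(E/E_w).
        c, t, j, e = x.shape
        tw, jw, ew = window

        def pad_axis(a, axis, mult):
            # Replicate-pad an axis (here copying from its start, an illustrative choice)
            # so that the window size divides it exactly.
            size = a.shape[axis]
            short = (-size) % mult
            if short == 0:
                return a
            idx = np.arange(short) % size
            return np.concatenate([a, np.take(a, idx, axis=axis)], axis=axis)

        x = pad_axis(x, 1, tw)
        x = pad_axis(x, 2, jw)
        x = pad_axis(x, 3, ew)
        _, t2, j2, e2 = x.shape

        # Split each axis into (number of windows, window size) and regroup the windows.
        x = x.reshape(c, t2 // tw, tw, j2 // jw, jw, e2 // ew, ew)
        x = x.transpose(0, 2, 4, 6, 1, 3, 5)       # (C, T_w, J_w, E_w, uT, uJ, uE)
        u = (t2 // tw) * (j2 // jw) * (e2 // ew)
        return x.reshape(c, tw, jw * ew, u)        # (C, T_w, J_w*E_w, U)

    # Example: 3D joints, 120 frames, 25 joints, 2 persons; window [20, 1, 2] as in the paper.
    tokens = ist_tokenize(np.random.randn(3, 120, 25, 2), (20, 1, 2))
    print(tokens.shape)  # (3, 20, 2, 150)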
To enrich the representation in coordinates, a 3D 1× 1 × 1 convolution is employed to extend the coordinate dimension from C to C', which can be formulated as X_w' = Conv3D_(1× 1 × 1)(X_w), where X_w' ∈ℝ^C'× T_w× (J_w× E_w)× U. The 3D convolution operation, followed by batch normalization and an activation function, serves as the embedding layer for the interactive spatiotemporal tokens. Finally, these tokens X_ist are fed to several Multi-head Self-attention Blocks to learn high-level cross-frame, cross-joint and cross-subject representations. §.§ Entity Rearrangement When partitioning ISTs as well as encoding positional information, the presence of a strict entity ordering can impede the model's ability to generalize to more cases. Specifically, among interactive entities engaged in mutual actions, some are semantically ordered and not interchangeable (e.g. left hand, right hand and object), while others are unordered and interchangeable (e.g. persons). The semantic equivalence of mutual subjects implies that the unordered entities are permutation-invariant: they can be arranged in any order while still representing the same interactive action. This observation suggests a simple yet effective way to eliminate the orderliness of interchangeable entities. Given the input skeleton sequence of size C× T× J × E, we first divide it into E parts along the interactive dimension, each of which represents the joint motion of one subject: [X_1, X_2,⋯, X_i,⋯, X_E] = Split(X_input), where [1, 2,⋯, i,⋯, E] are the indexes of the positional order along the interactive dimension. We can rearrange the original X_input as follows: X̃_input = Concat([X_v_1, X_v_2,⋯, X_v_i,⋯, X_v_E]), where [v_1, v_2,⋯, v_i,⋯, v_E] is an arbitrary arrangement of the indexes [1, 2,⋯, i,⋯, E]. The complete process of our proposed Interactive Spatiotemporal Tokenization with Entity Rearrangement is illustrated in Algorithm <ref>. Lines 1-6 refer to ER. Lines 7-11 refer to tensor padding. Lines 12-15 refer to tokenization using 3D windows. Lines 16-17 represent the embedding layers. During each training epoch, an input permutation X̃_input is selected, while in validation and testing the original input X_input is used. The total number of possible permutations of the entities is E!, indicating that each permutation has a probability of 1/E! of being chosen as input. A theoretical concern is that the factorial increase in the number of samples may lead to non-convergent training. However, in practice, E is typically small, since in most cases there are not many mutual subjects in a single interactive action. §.§ Token Self-Attention Blocks To model the spatial, temporal, and interactive relationships simultaneously, our architecture incorporates a multi-head self-attention mechanism instead of a graph-convolution-based design. Unlike many GCNs, which require the manual definition of an adjacency list for every joint based on prior knowledge of the physical connections between joints, our proposed architecture omits this tedious step for diverse action subjects. This also provides a unified approach to recognizing interactive actions of diverse subjects. Our proposed ISTA-Net consists of L Token Self-Attention Blocks. Similar to standard multi-head self-attention, the input X_L_i-1 is transformed into multiple sets of query Q, key K and value V as follows: Q = Conv3D_(1× 1 × 1)(X_L_i-1+PE(X_L_i-1)), K = Conv3D_(1× 1 × 1)(X_L_i-1+PE(X_L_i-1)), V = X_L_i-1, where the positional encoding, implemented with circular functions, is denoted as PE(·).
The number of such sets, namely heads, is denoted as H. The self-attention scores X^h_L_i of the h-th head are calculated as follows: X^h_L_i = (αtanh(QK^T/√(C_β)) + M)V, where QK^T is divided by the square root of the feature length C_β=T_w× J_w× E_w× C_L_i-qkv. A trainable regularized matrix M ∈ℝ^U× U is added to the normalized attention map with a trainable balancing factor α, which can benefit correlation learning<cit.>. All scores X^h_L_i of the H heads are concatenated to get X^H_L_i. In some TSA Blocks, the C_L_i-1 dimension is doubled to downsample the features (C_L_i=2× C_L_i-1), while in the others it remains the same (C_L_i=C_L_i-1): X̂_L_i = Conv3D_(1× 1 × k_u)(X^H_L_i), X́_L_i = Conv3D_(1× 1 × 1)(X̂_L_i+X^Res_L_i) + X^Res_L_i, where a 3D 1× 1 × 1 convolution with residual connections implements the feed-forward network (FFN). The last component is the Temporal Aggregation (TA) layer. Previous research<cit.> indicates that feature aggregation along the temporal channel is effective for modeling actions. In contrast to those methods, the proposed ISTA-Net uses 3D convolutions with kernel sizes larger than 1 in the temporal dimension (k_t>1) to aggregate sequence features: X_L_i = Conv3D_(k_t× 1 × 1)(X́_L_i) + X́^Res_L_i, which is followed by a residual connection X́^Res_L_i. § EXPERIMENTS §.§ Datasets NTU RGB+D 120<cit.>, the extended version of NTU RGB+D<cit.>, is a widely used action recognition dataset. It provides 114,480 samples of 120 human actions. In our experiments we focus on a subset of the NTU RGB+D 120 dataset, which consists of 26 kinds of mutual actions (named NTU Mutual for short). SBU-Kinect-Interaction<cit.> is a human activity dataset that depicts person-to-person interactions. It includes eight interactions, with RGB+D videos and extracted skeletons. H2O<cit.> is the first dataset constructed for egocentric 3D interaction recognition. With the 3D poses of both hands and the poses of manipulated objects, the H2O dataset facilitates the understanding of hand-to-hand and hand-to-object interactions. Assembly101<cit.> is a large procedural activity dataset. 3D hand poses are provided to advance 3D interaction recognition from egocentric views. It is a challenging task due to the dataset's complexity, which includes over 1,300 fine-grained classes of hand-to-object interactions. Each class consists of a single verb and an object that is manipulated. Additionally, the absence of object poses adds another layer of difficulty when judging the interactive actions. Statistics and difficulties of these datasets are summarized in Table <ref> and Fig. <ref>. For evaluation on NTU Mutual, we employ the Cross-subject (X-Sub) and Cross-set (X-Set) criteria <cit.>, using only the joint modality to ensure fair comparisons without fusion. For SBU, the suggested 5-fold cross-validation approach <cit.> is adopted. For H2O and Assembly101, we follow the training, validation, and test splits described in <cit.> and <cit.>, respectively. §.§ Implementation Details All of our experiments are conducted on a machine equipped with four GeForce RTX 3070 GPUs and CUDA version 11.4. For training on the NTU Mutual dataset, the SGD optimizer is used with Nesterov momentum of 0.9, an initial learning rate of 0.1 and a decay rate of 0.1. The window size is set to [20, 1, 2]. Cross entropy is used as the loss function with a label smoothing factor of 0.1 and a temperature factor of 1.0. The batch size is 32. Each training process was terminated after 110 epochs. Parameters for the other datasets might differ.
Please refer to the configurations in our GitHub repository. §.§ Comparison with Related Methods Table <ref> reports the experimental results on the NTU Mutual, SBU, H2O and Assembly101 datasets. The proposed ISTA-Net achieves state-of-the-art performance compared with other traditional action recognition and interactive action recognition methods. Benefiting from the proposed ISTs, TSA Blocks and ER, ISTA-Net outperforms many LSTM-, GCN-, and Transformer-based action recognition methods. ISTA-Net achieves gains of 5.16%, 5.22%, 0.11% and 5.68% over the most closely related interactive action recognition method, IGFormer<cit.>, on NTU Mutual X-Sub, X-Set, SBU and Assembly101. ISTA-Net also outperforms InfoGCN<cit.> by 0.34% and 0.59% on NTU Mutual, TA-GCN<cit.> by 9.84% on H2O, and MS-G3D<cit.> by 1.15% on Assembly101. As observed from the results, ISTA-Net also shows superiority and adaptability with respect to diverse interactive entities. Fig. <ref> visualizes the learnt attention in the last TSA Block, which verifies the effectiveness of ISTA-Net when modeling interactive actions. §.§ Ablation Study Comparison of Ways to Fuse Interactive Relations. We compare four approaches to modeling the interactive relations of spatiotemporal features. The first approach, called Late Fusion, is widely used when adapting traditional action recognition methods to interactive skeletons. In Late Fusion, interactions are only modeled in the classification head. The second one, Co-attention, employs weight-shared dual-branch self-attention blocks. In each block, K and V are obtained from the previous block of the same branch, while Q is obtained from the other branch. The third approach, Coordinate Concat, directly concatenates entity features along the coordinate dimension. The last one is our proposed IST, which fuses interactive features during early tokenization. Compared to the others, an additional dimension E is extended in this method. Table <ref> demonstrates that IST outperforms the other approaches by 1.77%, 0.78% and 2.42%. Effectiveness of Entity Rearrangement. We explore the effectiveness of Entity Rearrangement by removing this step. As reported in Table <ref>, the performance declined on the relatively larger NTU Mutual dataset, and more significantly on the relatively smaller SBU dataset. This indicates that ER is beneficial for enhancing model generalization, particularly when training with small-scale data. Effectiveness of Temporal Aggregation. To confirm the contribution made by Temporal Aggregation, we removed this step for comparison purposes. The results in Table <ref> indicate that TA can effectively aggregate local temporal motion features in ISTs and improve recognition performance. Comparisons of Different Input Frame Lengths and Window Sizes. We evaluate the influence of various input frame lengths and window sizes on the performance of ISTA-Net. On the NTU dataset, 60 and 120 are the two most widely adopted input frame lengths. To ensure fair comparisons when taking different numbers of frames, the window size is scaled accordingly in the temporal dimension, thus keeping the number of ISTs unchanged. The results presented in Table <ref> suggest that using 120 frames as input achieves the best performance, and that adding more frames introduces additional noise. Table <ref> shows that, given a fixed number of frames, a window size of [20,1,2] leads to the optimal result, indicating that joints can be modeled better at a fine-grained level.
§ CONCLUSIONS This paper proposes the Interactive Spatiotemporal Token Attention Network for general interactive action recognition, which does not require subject-type-specific graph prior knowledge to model diverse interacting entities. Our ISTA-Net consists of an Interactive Spatiotemporal Tokenization Block and Token Self-Attention Blocks. By extending an additional entity dimension in attention tokens, our design can simultaneously and effectively capture the interactive and spatiotemporal correlations of interactive actions. Moreover, we introduce Entity Rearrangement to respect the unordered nature of interchangeable subjects in Interactive Spatiotemporal Tokens. Our approach shows superior performance and adaptability on four benchmarks of interactive action recognition. § ACKNOWLEDGEMENT This work was supported by the National Natural Science Foundation of China (Grant No. 62203476, No. 52105079).
http://arxiv.org/abs/2307.05556v1
20230709154956
A multitype Fiksel interaction model for tumour immune microenvironments
[ "Jonatan A. González", "Paula Moraga" ]
stat.AP
[ "stat.AP" ]
A multitype Fiksel interaction model for tumour immune microenvironments Jonatan A. González Computer, Electrical and Mathematical Science and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Saudi Arabia and Paula Moraga Computer, Electrical and Mathematical Science and Engineering Division, King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Saudi Arabia August 12, 2023 ======================================================================================================================================================================================================================================================================================================================================================================================================================================= The tumour microenvironment plays a fundamental role in understanding the development and progression of cancer. This paper proposes a novel spatial point process model that accounts for inhomogeneity and interaction to flexibly model a complex database of cells in the tumour immune microenvironments of a cohort of patients with non-small-cell lung cancer whose samples have been processed using digital pathology techniques. Specifically, an inhomogeneous multitype Gibbs point process model with an associated Fiksel-type interaction function is proposed. Estimation and inference procedures are conducted through maximum pseudolikelihood, considering replicated multitype point patterns. Keywords: Digital pathology; Gibbs models; Non-small cell lung cancer; Point process models; Pseudolikelihood; Replicated point patterns. § INTRODUCTION Tumours are complex ecosystems that consist of much more than a collection of cancer cells. For example, they contain epithelial cells, fibroblasts, blood and lymphatic vessels, and infiltrating hematopoietic cells, among others <cit.>. These structural elements affect the growth and clinical conditions of the tumour. The ecosystem surrounding a tumour within the body is usually known as the tumour microenvironment. It is a set of infiltrating and resident host cells, secreted factors, and extracellular matrix <cit.>. Tumour cells stimulate essential molecular, cellular, and physical changes within host tissues to support tumour growth and progression. The composition of the tumour microenvironment varies between tumour types, but distinctive features include immune and stromal cells, among others. The tumour microenvironment characterises the tumour and its environment and plays a fundamental role in understanding the development and progression of cancer. One of the key components of the tumour microenvironment is the tumour immune microenvironment, which has a highly diverse composition, including various populations of T-cells, B-cells, dendritic cells, natural killer cells, myeloid-derived suppressor cells, neutrophils, or macrophages <cit.>. Recent advances in imaging techniques allow scientists to study the spatial structure of the tumour microenvironment or tumour immune microenvironment at a level of detail down to a single cell <cit.>. These data (several antibody markers) are complex to acquire and process.
Processing them involves various efforts in the laboratory and applying various numerical and statistical methods to reduce non-biological variability, i.e., the variability due to the computational procedures to process the data <cit.>. Some research teams have developed suitable methods and software to address these challenges. In this context, image normalisation is a technique that adjusts an image's input pixel- or image-level values to remove noise and improve image quality. Some statistical tools for normalisation improve the similarity across images by removing the unknown effect of technical variability. To normalise multiplex image data, <cit.> implement and compare data transformations and normalisation algorithms in multiplexed imaging data providing a foundation for better data quality and evaluation criteria in multiplexed imaging. <cit.> propose a density-based method for distinguishing the difference between the subjects concerning the distribution of a functional marker in the tumour microenvironment or tumour immune microenvironment. <cit.> examine how spatial interactions among different immune cells in the ovarian cancer tumour microenvironment are associated with overall survival using scalar spatial summaries. Currently, spatial statistical techniques are preferred when analysing this type of data <cit.>. The locations of the immune cells in the tumour immune microenvironment can be assumed as a spatial point pattern in a predefined observation window, usually given by the limits in which the image of the biological sample was processed. The most straightforward point process is completely spatial random (CSR or stationary Poisson process), where the expected value of the number of immune cells is assumed to be constant throughout the region of interest and where the cells do not interact with each other <cit.>. This model, however, is unrealistic in practice since cells easily violate both assumptions <cit.>, accumulate in certain preferred regions of the tissue (inhomogeneity), repel or attract each other, or even attract each other on one scale and repel each other on another scale (interaction). Therefore, we can study the tumour microenvironment or tumour immune microenvironment from the spatial point processes point of view by employing some tools to deal with inhomogeneity and interaction. We consider the different cell sub-types within the tumour microenvironment or tumour immune microenvironment to provide helpful information on how cells behave and how their distribution is affected. We also consider various exogenous factors simultaneously. This could allow future medical or clinical decisions regarding the patient to be positively influenced by the knowledge acquired about this cellular dynamic. Analysing densities and interactions between points in some spatial domain is a primary pursuit in spatial statistics <cit.>. Some real datasets have motivated these analyses; for example, biology <cit.>, neuroscience <cit.> and ecology <cit.>. Commonly, the literature describes multivariate point patterns through second-order summary descriptors such as the K- or J- functions in their multitype versions <cit.>. There are other methods for multitype point patterns, such as the mark connection function more suitable for detecting mark correlation in an exploratory analysis <cit.>. Testing spatial independence between two components of a stationary bivariate spatial process is a well-known problem in the literature <cit.>. 
Gibbs point processes are a wide class that includes, for example, all Cox processes and all finite point processes having a density with respect to the Poisson process <cit.>. Gibbs processes are motivated by statistical physics and arise from the forces acting on and between particles in a fluid or gas. We start by assuming that the total potential energy V(·) corresponding to a given configuration of particles, that is, an instantaneous snapshot, can be disaggregated into different terms that represent the potential energies of the individual particles (which can come from external force fields), the interactions between particles taken in pairs, triples, and so on. Often it is assumed that only the first and second-order terms need to be included. Then, a representation of the total potential energy for n particles X={ξ_i}_i=1^n would be given by <cit.> V(X)=V(ξ_1,…,ξ_n)=∑_j=1^n ∑_1≤ i_1< ⋯ < i_j≤ n V_j(ξ_i_1,…,ξ_i_j), where V_j(·) is the interaction potential of order j. One of the principles of statistical mechanics establishes that, in equilibrium, the probability density of a point pattern, that is, of a particular configuration of points, is inversely proportional to the exponential of the potential energy; that is, proportional to e^-V(X)/T, where T is the temperature. The potential energy is the total work required to move the particles to form the point pattern X. Markov point processes, a subclass of these Gibbs processes in which the interaction range of the particles is assumed to be finite, are flexible statistical models for spatial point patterns <cit.>. In this paper, we propose a novel approach that leverages several spatial statistical techniques to model the distributions of cells in tumour immune microenvironments flexibly. We employ a non-small cell lung cancer (NSCLC) dataset collected by multiplex immunohistochemistry (mIHC) <cit.>. The data comprise tissue samples collected from 122 non-small cell lung cancer patients. These samples were processed to isolate the tumour immune microenvironments and obtain marked point patterns where each point represents an immune cell, which is marked as belonging to one of five immunity markers (see Section <ref>). Similarly, the dataset includes some clinical factors such as age, whether the patient has undergone chemotherapy, the stage of the disease and survival time. Our main objective is to develop a multitype inhomogeneous point process model for the cells of the tumour immune microenvironment that includes the acquired contextual knowledge, i.e., the immunity marks, the clinical factors of the patients and the possible interaction between cells of the same and different types. We formulate an accurate statistical model and validate it to answer the scientific question behind our research objective, taking advantage of all the data components. To do this, we start from a general principle of points interacting in Gibbs' fashion and their probability distribution. We incorporate several factors, such as the trend, whose baseline is estimated using non-parametric techniques. We add a multitype pairwise interaction component inspired by cell dynamics. We use several methods for estimation and inference: the descriptive input of second-order statistics such as Ripley's K-function, the profile pseudolikelihood, and maximum pseudolikelihood estimation.
In addition, since the data were collected from several patients, we take advantage of methods for replicated point patterns to feed the model and obtain more robust estimates of the model parameters. The remainder of this article is organised as follows. We describe the tumour immune microenvironment dataset in Section <ref>. Section <ref> contains the fundamental notions about Gibbs processes. In Section <ref>, we introduce the Fiksel interaction function in its multitype version and describe the methods we utilise to make statistical inference. In Section <ref>, we detail the analysis of the tumour immune microenvironment dataset step by step. We estimate the model's components by combining several techniques, starting from the corresponding geometric considerations, and we also compare the model's performance with that of several other alternative models. We define a type of root mean square error (RMSE) based on residuals to facilitate comparisons. We end with some comments, directions for future research, and final considerations in Section <ref>. § IMMUNE CELLS DATA The cellular composition of the tumour immune microenvironment can be studied through multiple well-known techniques in digital pathology <cit.>. In this work, the data come from multiplex immunohistochemistry (mIHC), which allows the evaluation of multiple markers in a single experiment and may detect the spatial locations of multiple cell types. <cit.> used multispectral quantitative imaging on the lung adenocarcinoma tumour microenvironment in 153 patients with resected tumours. The data consist of a single slide per patient, where they evaluated the tumour microenvironment with markers for CD3, CD8, CD14, CD19, major histocompatibility complex II (MHCII), cytokeratin, and 4',6-diamidino-2-phenylindole (DAPI). They then performed image analysis, including tissue segmentation and phenotyping, and attached spatial coordinates. The data are available at <cit.>. Specifically, the data associated with each patient come in a point pattern format representing the phenotype map of CK^+ cancer cells, CD4^+ (CD3^+CD8^-) T-cells, CD8^+ T-cells, CD14^+ cells and CD19^+ B-cells. A randomly chosen processed tissue sample (patient 45) is displayed in Figure <ref>. The data for this study came from the Mayo Clinic Lung Cancer Repository, where they ensured compliance with applicable ethical and data protection protocols. The selected patients (a total of 122) underwent curative surgical resection of lung adenocarcinoma between 2004 and 2007. These patients had not received targeted anticancer therapy and had available residual tumour specimens. In addition, extra information related to each patient was extracted, which will be considered as design (non-spatial) covariates. The covariates are gender (56% women), age at the time of surgery (a mean of 68 years), stage of cancer (42% IA, 23% IB, 10% IIA, 10% IIB, 12% IIIA, 1% IIIB, 3% IV), cancer cell MHCII status (67% high (≥ 0.5%)), survival days (a mean of 2389 days, i.e., approximately six and a half years), death (44% dead), recurrence or death event (38% with no recurrence), and adjuvant therapy (86% with no therapy). To explain the stage of cancer, we follow <cit.>. Stage I is divided into stages IA and IB. In stage IA, the tumour is only in the lung and is up to 3 cm in size. At this stage, cancer has not spread to the lymph nodes. In stage IB, the tumour size lies between 3 and 4 cm, and cancer has not spread to the lymph nodes.
Stage II is also divided into two categories: IIA, where the tumour lies between 4 and 5 cm and cancer has not spread to the lymph nodes, and IIB, where the tumour lies between 4 and 5 cm and cancer has spread to lymph nodes on the same side of the chest as the primary tumour; the lymph nodes with cancer are in the lung or near the bronchus. Stage III is divided into three categories. In IIIA, the tumour is up to 5 cm and cancer has spread to lymph nodes on the same side of the chest as the primary tumour; the lymph nodes with cancer are around the trachea or aorta, or where the trachea divides into the bronchi. In IIIB, the tumour is up to 5 cm, and cancer has spread to lymph nodes above the collarbone on the same side of the chest as the primary tumour or to any lymph nodes on the opposite side of the chest from the primary tumour. In IIIC, the tumour may be any size, and cancer has spread to lymph nodes above the collarbone on the same side of the chest as the primary tumour or to any lymph nodes on the opposite side of the chest from the primary tumour. Finally, in stage IV, the tumour may be any size and cancer may have spread to the lymph nodes. MHCII status indicates the presence of MHCII molecules, a class of the major histocompatibility complex (MHC) that is very important in initiating immune responses <cit.>. The next factor is survival days, the time in days from the date of diagnosis to the date of death or the date when data collection stopped (censoring). The death factor establishes whether the participant passed away during the data collection. The recurrence or death event indicates whether or not the participant had a recurrence or died. Finally, adjuvant therapy establishes whether or not the participant received adjuvant therapy, understood as any additional cancer treatment given after the primary one. Figure <ref> summarises this clinical information. § GIBBS MODELS Interaction is a fundamental concept between points that often cannot be observed through second-order descriptors, as these functions measure correlation and not causal interaction <cit.>. Gibbs point process models, also called Markov point processes, explicitly hypothesise that interactions occur between the points of the process. These models can mimic a wide range of point patterns and can easily combine repulsion and attraction at different scales. In practice, Gibbs models only produce weak inhibition or clustering; they are helpful for modelling non-strongly clustered patterns. These models can be built based on the concept of the Papangelou conditional intensity <cit.>. §.§ Papangelou conditional intensity The conditional intensity function is a valuable statistic for studying and modelling point patterns <cit.>. It describes the probability of observing a cell of type m∈ℳ conditional on the configuration of its neighbours in the observation window W. We consider a realisation of a marked (multitype) point process 𝐗 as a set X={(𝐱_i, m_i)}_i=1^n, where 𝐱_i∈ W are the spatial locations, and m_i∈{1,2,…, M} are the types. For simplicity, we may denote a marked location (𝐱_i,m_i) as ξ_i. If f(X) is the joint probability density function of a multitype point pattern X, this function can be written as <cit.> f(X)= ζ[∏_i=1 ^n B_m_i(𝐱_i)][ ∏_i<jΦ_m_i,m_j(𝐱_i,𝐱_j)], where ζ is known as the normalising constant and is usually intractable <cit.>, B_m(𝐱) is a non-negative first-order trend of points of type m, and Φ_m,m'(𝐱,𝐱'), m,m'∈{1,…,M}, are pairwise interaction functions for points of types m and m'.
Then the Papangelou conditional intensity, or simply the conditional intensity, at any point 𝐱 of type m of the observation window W is defined as λ(ξ|X)= λ((𝐱,m)|X)= f(X ∪{ (𝐱,m)} )/f(X ) = B_m(𝐱)∏_i=1^nΦ_m,m_i(𝐱,𝐱_i), where X ∪{ (𝐱,m)} is the extended point pattern obtained by adding (𝐱,m)=ξ to the set of points. The conditional intensity at a data point ξ_i is defined as λ(ξ_i|X):= λ(ξ_i|X \ξ_i), i.e., removing the data point ξ_i in the denominator. §.§ Potential energy We can specify a Gibbs model by writing a formula for the probability density f(X) as a product of the terms associated with each interaction. We call the logarithm of the probability density, V(X) = log f(X), the (negative) potential of the model. The potential might be written as V(X) = V_0 + V_1(X) + V_[≥ 2](X), where V_0 is a constant and V_1 and V_[≥ 2] represent the spatial trend and the spatial interactions (of all orders), respectively. This enables us to define, for instance, a pairwise interaction model; we would define the log trend V_1(·) and the pair potential V_2(·,·) for all points. There is a significant technical piece behind this theory <cit.>. Still, the key point is that we may formulate Gibbs models with an arbitrary first-order spatial trend term V_1({ξ}) = Z(ξ ). Then, we have to choose the interaction term V_[≥ 2](·) from a list of well-studied higher-order potentials whose integrability properties are known. For the conditional intensity, we have λ(ξ |X)=exp{Δ_ξ V(X) }, where Δ_ξ V(X)=V(X ∪{ξ})- V(X). §.§ Fitting Gibbs models The idea is to model the conditional intensity in the following way, logλ(ξ |X) = B(ξ ) + η^⊤ Z(ξ ) + ϕ^⊤T(ξ|X), where B(ξ) is an optional, real-valued function representing a baseline or offset. The first-order term Z(ξ ) describes spatial inhomogeneity or covariate effects. The higher-order term T(ξ |X) describes interpoint interaction, and η and ϕ are parameters. § FIKSEL INTERACTIONS Motivated by the exponential decay observed in some cellular contexts <cit.>, we assume that the interaction energy between every pair of immune cells (the pair potential) decreases exponentially. <cit.> proposed a bivariate pair potential function. As a step further, we present its straightforward multitype version, given by Φ_ij(r) equal to -∞ for 0≤ r < h_ij, equal to c_ij·exp{-γ_ij· r} for h_ij≤ r < R_ij, and equal to 0 for R_ij≤ r, where {h_ij}, {c_ij}, {γ_ij} and {R_ij} are parameters, and i,j∈ℳ. The parameter h_ij is the hardcore distance between types; points of types i and j must be separated by at least a distance h_ij. The interaction strength parameter c_ij controls the type of interaction; it is zero for independent processes, positive for attractive processes and negative for repulsive processes. The rate or slope γ_ij controls the decay of the interaction between i and j as the distance increases. The interaction range R_ij means that points beyond this distance do not interact. §.§ Inference §.§.§ Pseudolikelihood For a multitype pairwise interaction process, the pseudolikelihood can be written as <cit.> PL = [∏_i=1 ^n B_m_i(𝐱_i)][ ∏_i<jΦ_m_i,m_j(𝐱_i,𝐱_j)] ·exp{- ∑_m∈ℳ∫_W B_m(𝐮)∏_i=1^nΦ_m,m_i(𝐮,𝐱_i) d𝐮}, where ℳ is the set of types. §.§.§ Berman and Turner's device The Berman-Turner device is a computational tool for approximating maximum pseudolikelihood estimates <cit.>. It generates a set of marked points, including both a set of dummy points and the data points, and forms a good quadrature rule for W ×ℳ.
A quadrature rule is an approximation of an integral ∫_W f(𝐮) d𝐮 by a weighted sum ∑_j w_j f(𝐮_j) of the function values at specified points (quadrature points) within the integration domain. The weights w_j are quadrature weights that sum to |W|. <cit.> proposed a practical scheme to select the weights; it partitions W into tiles of equal area, where each tile contains a dummy point. Consider the Cartesian product of a set of quadrature points in W and the set ℳ. We write the marked points as (𝐮_j, k_ℓ) for j=1,…,J and ℓ =1,…,L, where 𝐮_j ∈ W and k_ℓ∈ℳ. Then we define the indicator z_j ℓ to equal one if (𝐮_j,k_ℓ) is a data point and zero if it is a dummy point. Let w_j ℓ be the corresponding weights for a linear quadrature rule in W×ℳ. Then the log-pseudolikelihood is approximated by logPL≈∑_ℓ = 1^L ∑_j=1^J (υ_jℓlogλ_j ℓ - λ_j ℓ) w_j ℓ, where λ_j ℓ:= λ((𝐮_j,k_ℓ)|X), υ_j ℓ:=z_j ℓ / w_j ℓ, z_j ℓ:= 1 if 𝐮_j is a data point, z_j ℓ:= 0 if 𝐮_j is a dummy point, and the weights w_j ℓ are the areas of the tiles. The approximation given in (<ref>) has the form of a weighted (w_j ℓ) log-likelihood of Poisson random variables 𝒴_j ℓ with expected values λ_j ℓ <cit.>. It can be handled computationally using generalised linear model techniques <cit.> or even generalised additive models <cit.>. §.§ Replicated point patterns methodology Assume that we have g experimental units and that the response from unit k≤ g is a multitype point pattern X^k observed in a window W^k. We assume that the point patterns { X^k}_k=1^g are independent conditional on the covariates and random effects. The conditional intensity for the kth point pattern is λ^k(ξ |X)=exp{Δ_ξ B^k(X) + θ^⊤Δ_ξ Y^k(X) }, where Y^k(X) := (Y_1^k(X), …, Y_p^k(X) ) is a vector-valued function representing fixed effects. Notice that random effects can easily be included as an additional term on the right-hand side of Eq. (<ref>). Furthermore, notice that every function f(X) of a point pattern X can be expressed as f(X)=f_[1](X) + f_[≥ 2](X), where f_[1](X)=∑_ξ_i∈ X f({ξ_i}) is the first-order component and f_[≥ 2](X)=f(X) - f_[1](X) is the interaction term <cit.>. When we apply this decomposition to the functions B^k(X) and Y^k(X), we retrieve the first-order components B_[1]^k(ξ), Y_[1]^k(ξ), which resemble the offset and the covariate effects in Eq. (<ref>), and the interaction components B_[≥ 2]^k(ξ) and Y_[≥ 2]^k(ξ). In our particular case, we assume that the interaction has canonical parameters; therefore, B_[≥ 2]^k(ξ) vanishes and Y^k_[≥ 2](X) comes from a pairwise interaction of Fiksel form (see Section <ref>). The log-pseudolikelihood is given by logPL = ∑_k=1^g ∑_(𝐮_i,m_i) ∈ X^klogλ^k((𝐮_i,m_i)|X^k) - ∑_k=1^g ∑_m∈ℳ∫_W^kλ^k((𝐮,m)| X^k) d𝐮; this expression is equivalent to the pseudolikelihood of a Gibbs process on the disjoint union of the windows <cit.>. § IMMUNE CELLS MODEL In this section, we develop a multitype Fiksel interaction model for lung cancer patients' tumour immune microenvironments. First, we define a common window for all the observations of the different patients. We then combine two techniques, maximum profile pseudolikelihood and maximum pseudolikelihood, to estimate the model parameters. We then propose a set of additional models in order to compare the performance of our model with that of others. We make the comparison using residual measures and the RMSE (or its analogue in this context), which we define to summarise the residuals.
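To illustrate the computations behind the approximation in (<ref>), the following Python/NumPy sketch evaluates the Berman-Turner form of the log-pseudolikelihood for a log-linear conditional intensity and maximises it over the regular parameters. It is only a schematic illustration under our own naming conventions and simulated quadrature data; the analyses in this paper are fitted with dedicated point process software rather than code of this kind.

    import numpy as np
    from scipy.optimize import minimize

    def approx_log_pseudolikelihood(theta, stats, weights, is_data):
        # stats   : (n, p) canonical statistics of the log-linear conditional intensity
        #           at each quadrature point (data points and dummy points together)
        # weights : (n,) quadrature weights w_{jl} (tile areas)
        # is_data : (n,) boolean indicator z_{jl}, True for data points
        log_lam = stats @ theta                  # log conditional intensity at quadrature points
        y = is_data.astype(float) / weights      # pseudo-responses v_{jl} = z_{jl} / w_{jl}
        # Weighted Poisson log-likelihood form: sum_j w_j * (y_j * log(lam_j) - lam_j)
        return np.sum(weights * (y * log_lam - np.exp(log_lam)))

    # Toy example with three regular parameters and simulated quadrature data
    rng = np.random.default_rng(1)
    stats = rng.normal(size=(2000, 3))
    weights = np.full(2000, 0.05)
    is_data = rng.random(2000) < 0.1
    fit = minimize(lambda t: -approx_log_pseudolikelihood(t, stats, weights, is_data),
                   x0=np.zeros(3), method="BFGS")
    print(fit.x)   # approximate maximum pseudolikelihood estimate of the regular parameters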
§.§ Observation window and edge correction Since the tissue block extraction process was consistently done on 5 slides, the observation windows are the same in theory but slightly different in practice due to measurement errors and precision. To alleviate this effect, we will consider each patient's observation window W^ℓ as a dilation of the convex hull that contains the data (by 1/√(1 - ω_ℓ / n_ℓ); n_ℓ is the number points of X^ℓ, and ω_ℓ is the number of vertices of the convex hull of the patient ℓ) <cit.>. Once we have these windows, the final observation window, which is also common for all patients, is defined as W:=⋂_ℓ = 1^151 W^ℓ. When the goal is to make inference, it is important to assume that the data is a realisation of a finite point process defined only within W (bounded case) or a partially observed realisation of a point process that extends along a bigger domain only through the window W (unbounded case). In our context, we must assume that our point patterns are partially observed realisations. This is due to two reasons, first is that the tissues analysed before imaging are just samples of larger tissue (lungs in this case). Second, we cut the windows to make a common window through Eq. (<ref>). There can be edge-effect problems in the unbounded case <cit.> since some information might come from unobserved points outside the final observation window. There are several methods in the literature to alleviate this type of effect <cit.>. In our case, we use the well-known border method <cit.>, which obtains the pseudo-likelihood integration domain by cutting a width margin r from the original observation window. §.§ Trend and interaction terms We incorporate inhomogeneity into our model through an offset in which we non-parametrically estimate the total first-order intensity of each of our point patterns. The total intensity function is defined as B_∙(𝐮)=∑_m∈ℳB_m(𝐮), <cit.>. In this way, we generate a smooth estimate of the expected value of the number of immune cells at each point in the observation region, considering all cell types simultaneously. This estimation is made through a spatial Gaussian kernel with adaptive bandwidth <cit.>. This estimator is defined as follows, B̂_m(𝐮) =1/e_ϵ(𝐮) ∑_𝐮_i ∈ X_mK_ϵ(𝐮_i)(𝐮-𝐮_i), 𝐮∈ W, m∈ℳ, where K(·) is a Gaussian kernel, ϵ(·) is a bandwidth function and e_ϵ(𝐮) is an edge correction <cit.>. The estimates for B_∙(𝐮) and for B_CD14^+(𝐮), B_CD19^+(𝐮), B_CD4^+(𝐮), B_CD8^+(𝐮) and B_CK^+(𝐮) are shown in Figure <ref>. Additionally, we consider the design covariates, which, although not spatial, correspond to factors that influence the overall conditional intensity. These factors have been explained in Section <ref> and are related to patients' clinical information; we denote them by Z. We introduce the interaction between cells into the model through the term T(𝐮,m). This inclusion of interaction entails the assumption of several facts; in this case, we assume that we have the same interaction for all patients, that is, the Fiksel interaction defined by the Φ_ij(r) function given in Eq. (<ref>). We also assume we have the same sets of parameters {h_ij}, {γ_ij},{R_ij} and {c_ij} for all patients. Modifying this assumption would correspond to having a previously identified mechanism that could alter these parameters for different groups of patients or, in the worst case, assuming that each patient has their own isolated set of parameters. This decision would make the model highly complex and likely cause overfitting problems. 
Therefore, we opt for the most parsimonious model in this case. §.§.§ Irregular parameters When a parameter of a point process model does not appear in the log-linear form (<ref>), it is called irregular <cit.>. In contrast, the other parameters are called regular. Consider, for example, a model of the form logλ_ϑ((𝐮,m)|X) = φ^⊤· Z((𝐮,m), ψ|X), where ψ is the vector of irregular parameters, φ is the vector of regular ones and ϑ := (φ, ψ). For every fixed value of the irregular parameters, the model is log-linear in the regular ones, i.e., if we fix the values of ψ, then the model is log-linear in φ; this model can be fitted using maximum pseudolikelihood over φ <cit.>. To retrieve a maximum profile pseudolikelihood estimate, we assign a value to ψ; then, the pseudolikelihood PL(φ, ψ) can be maximised over all possible values of φ, PPL(ψ)=max_φPL(φ, ψ). The maximum pseudolikelihood estimate of ϑ can be obtained by maximising the profile pseudolikelihood over ψ. Hardcore distances A maximum likelihood estimator of the hardcore radii is the minimum nearest-neighbour distance amongst the points with different labels <cit.>. Given that we have replicated patterns, we choose the minimum across the replicates for every matrix entry, so that distinct points are not permitted to come closer than this minimum distance. The estimate is given by ĥ_ij = [ CD14 CD19 CD4 CD8 CK; CD14 0.498 0.496 0.497 0.497 0.495; CD19 0.496 0.499 0.496 0.495 0.481; CD4 0.497 0.496 0.498 0.496 0.496; CD8 0.497 0.495 0.496 0.498 0.497; CK 0.495 0.481 0.496 0.497 0.499 ]. We observe similar values in the ĥ_ij entries. This means that the tumour immune microenvironment cells could share a common hardcore distance, which could simplify the model since, instead of considering a matrix of hardcore distances, we could consider only a positive scalar, given, for example, by min_ij{ĥ_ij}=0.481. Models with this type of simplification are shown in Section <ref>. Interaction range and rate or slope We must provide a suitable range of values for the parameters in order to apply the maximisation of the profile pseudolikelihood. For the interaction range, we can examine Ripley's K-function K_ij(r) (or its variance-stabilised version, the L-function L_ij(r)) in its multitype inhomogeneous version <cit.>. Roughly speaking, this function represents the expected number of events of type j, weighted by the reciprocal of the intensity at each point of type j, within distance r of an arbitrary event of type i. It can be estimated by K̂_ij(r)=1/|W|∑_𝐮_ℓ∈ X_i∑_𝐮_k∈ X_j1{||𝐮_ℓ-𝐮_k|| ≤ r}/B̂_i(𝐮_ℓ)B̂_j(𝐮_k)e(𝐮_ℓ, 𝐮_k;r), i,j∈ℳ, where 1{·} is the indicator function, and e(·) is an edge correction <cit.>. The L-function is intended to stabilise the variance of the K-function and is defined as L_ij(r):=√(K_ij(r)/ π). For illustration purposes, Figure <ref> displays the L-functions of cells of the same type. L-functions of the same type (L_ii(r)) are the usual L-functions of the process X_i of points of type i, meaning that the same interpretation applies as if there were no labelling. For example, the typical benchmark of L(r)=r for Poisson processes still applies here. We select a maximum possible interaction range such that the mean L-function (understood as the classical functional mean) across the patients shows stable behaviour beyond the selected maximum; this value is 33.81 and is displayed as a vertical black line in Figure <ref>.
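As an illustration of how K̂_ij(r) and L_ij(r) can be evaluated for a single patient, the following Python/NumPy sketch implements a naive version of the estimator above with the edge correction omitted (i.e., e(·)≡ 1). It is a simplified sketch for exposition only, since the analyses reported here rely on properly edge-corrected estimators from standard spatial statistics software.

    import numpy as np

    def inhom_cross_K(xy_i, xy_j, lam_i, lam_j, r_grid, area):
        # xy_i, xy_j   : (n_i, 2) and (n_j, 2) locations of cells of types i and j
        # lam_i, lam_j : estimated first-order intensities evaluated at those locations
        # r_grid       : increasing vector of distances r at which to evaluate K_ij
        # area         : |W|, the area of the common observation window
        d = np.linalg.norm(xy_i[:, None, :] - xy_j[None, :, :], axis=-1)  # pairwise distances
        w = 1.0 / (lam_i[:, None] * lam_j[None, :])                       # reciprocal intensity weights
        if xy_i is xy_j:
            np.fill_diagonal(w, 0.0)                                      # exclude self-pairs for K_ii
        K = np.array([np.sum(w[d <= r]) for r in r_grid]) / area
        L = np.sqrt(K / np.pi)                                            # variance-stabilised L_ij(r)
        return K, L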
For the range of the rate or slope, we take into account the scale of the data and assign γ_ij∈ [-0.2, 0.2]. After applying the procedure of maximising the profile pseudolikelihood, we retrieve the estimates of R_ij and γ_ij, R̂_ij = [ 27.11 21.00 18.03 20.03 24.42; 21.00 27.11 19.08 19.32 25.92; 18.03 19.08 27.11 16.40 23.13; 20.03 19.32 16.40 27.11 24.55; 24.42 25.92 23.13 24.55 27.21; ], γ̂_ij = [ 0.110 -0.066 -0.031 -0.041 -0.082; -0.066 0.073 -0.071 -0.080 -0.077; -0.031 -0.071 0.111 -0.052 -0.048; -0.041 -0.080 -0.052 0.111 -0.043; -0.082 -0.077 -0.048 -0.043 0.200; ]. §.§.§ Regular parameters There are ten regular parameters ν_1,…,ν_X and c_ij, i.e., all terms that appear in the log-linear form of the conditional intensity; one of them is the intercept, and eight of them are the coefficients of the clinical covariates. The estimation procedure is carried out through the pseudolikelihood and the Berman-Turner approximation, considering the replicates. The estimated coefficients are shown in Table <ref>. All model design covariates were statistically associated with the conditional intensity except for the recurrence variable (p-value of 0.911); i.e., the factor that reports whether the patient had a recurrence or died has no statistical impact on the conditional intensity. The values in Table <ref> come from a generalised linear model, as detailed in Section <ref>. Therefore, it should be noted that the p-values are calculated based on traditional mechanisms. This means that the significance depends on the number of observations, roughly seven million in this case. With such a large number of observations, it is logical and expected that almost all the factors become statistically significant <cit.>, which is what happens in this case. To avoid a vague interpretation, we focus on the regression coefficients (exp{η}). Factors associated with a reduced conditional intensity are gender, where men generally have lower immune cell counts than women; MHCII status, where a low MHCII status is associated with a lower intensity than a high MHCII status; and death, where those who died showed a lower immune cell density than those who did not. Patients who received adjuvant therapy also show lower counts than those who did not; this may occur because immune cells may be found within the cancerous tissues that adjuvant therapies target for removal or destruction. Regarding the disease status, we can observe an intensity increase in patients in stage IV compared to those in stage IA and a decrease in patients in stage III. Fitted interaction strength The other parameter of the model is the strength of the Fiksel interaction term c_ij. ĉ_ij = [ 1.3052 0.9995 0.9994 0.9998 0.9996; 0.9995 1.2171 0.9997 0.9996 0.9996; 0.9993 0.9997 1.1951 0.9988 0.9999; 0.9998 0.9996 0.9988 1.4694 0.9993; 0.9996 0.9996 0.9999 0.9993 1.0473 ]. For illustration purposes, in Figure <ref>, we show the conditional intensities of CD14^+ cells considering their interaction with cells of the same type and their interaction with CD19^+ cells for a single patient included in our sample. The small size of the white dots represents the minimum repulsion distance ĥ_ij, i.e., CD14^+ cells are prohibited from locating within 0.498 of other CD14^+ cells and within 0.499 of CD19^+ cells. Beyond this distance, the attraction decays exponentially with distance according to the Φ_ij(r) function given in Eq. (<ref>) for cells of the same type. In the case of cells of a different type, the Φ function does not decay; instead, it increases due to the sign of the γ_ij parameter.
However, the magnitude of this quantity is generally smaller for different cells, which makes the interaction's strength less in these cases. This type of behaviour, where the magnitudes of interaction are observed to be so small for cross-terms, makes us think of simpler alternative models, for example, an interaction model only within types. These models are discussed in Section <ref>. Figure <ref> shows each cell type's fitted conditional intensity logλ̂(ξ|X) from an arbitrarily chosen patient. This conditional intensity is evaluated in a regular mesh in the observation window W. We may see how the adjustment is satisfactory even in cases with few points, such as CD14^+, CD19^+ and CD4^+. We see a better fit for cell types with more points, CD8^+ and CK^+. This evaluated conditional intensity strongly suggests that the fit is adequate, considering the model is simultaneously set up for all patients. This gives us an idea of how suitable the model with the Fiksel interaction is for tumour immune microenvironment modelling. §.§ Assessing the model We want to test whether or not the model is working well; this model evaluation can be done in many ways. In this work, we compute some residual summaries of the proposed model. Residuals from point process models are a proper diagnostic measure for comparisons <cit.>. §.§.§ Comparing several models We consider several models to be able to compare their performance and finally opt for one or some. To do this, we rely on the fact that our model comprises three fundamental parts: an offset, other first-order effects (design covariates), and second-order effects (Fiksel's interaction). In order to understand how good the model is, we will propose several models where these different parts are included or not. We then consider three sets of models. In the first one, we include four models that have in common the multitype Fiksel-type interaction function that we propose in this article. The first is the model described in Section <ref>, containing all the components (Fiksel 1). The second is a model without first-order effects (Fiksel 2); the third considers only the effects of the design covariates but not the offset (Fiksel 3). Finally, the fourth model considers first-order effects, but an interaction function that, although it is Fiksel type, assumes that the interaction does not depend on different types of immune cells; that is, the interactions only occur between cells of the same type (Fiksel 4). The literature on multitype Gibbs models is not very huge as far as we know. Some classical univariate interaction functions have been extended to the multitype case <cit.>. For example, multitype Strauss, Hardcore and Strauss Hardcore models <cit.>. For the next set of models for comparison, we opt for holding the first-order terms and changing the pairwise interaction functions. So we choose multitype Strauss (Strauss), Hardcore (Hardcore), and Strauss Hardcore (Srt Hardcore) models. The interaction pairwise functions for these models are shown in Table <ref>. We have also decided to generate a last model with the first-order effects but no associated interaction function (Poisson). This model corresponds to a Poisson model to explain the conditional intensity function. It should be noted that the conditional intensity coincides with the first-order intensity of an inhomogeneous Poisson process under this assumption of no interaction between points <cit.>. 
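For concreteness, the pair potentials entering the compared models can be written down directly. The Python sketch below encodes the multitype Fiksel potential of Eq. (<ref>) and, for comparison, a generic Strauss-hardcore-type potential with a constant interaction term between the hardcore distance and the interaction range; the latter is a standard textbook form and not necessarily the exact parameterisation reported in Table <ref>, and all function and parameter names are our own.

    import numpy as np

    def fiksel_phi(r, h, c, gamma, R):
        # Multitype Fiksel pair potential: -inf for r < h (hardcore),
        # c * exp(-gamma * r) for h <= r < R, and 0 for r >= R.
        r = np.asarray(r, dtype=float)
        out = np.zeros_like(r)
        out[r < h] = -np.inf
        mid = (r >= h) & (r < R)
        out[mid] = c * np.exp(-gamma * r[mid])
        return out

    def strauss_hardcore_phi(r, h, R, log_gamma):
        # Generic Strauss-hardcore-type potential: -inf below the hardcore distance h,
        # a constant term log_gamma on [h, R), and 0 beyond the interaction range R.
        r = np.asarray(r, dtype=float)
        out = np.zeros_like(r)
        out[r < h] = -np.inf
        out[(r >= h) & (r < R)] = log_gamma
        return out

    # Example with the fitted CD14-CD14 values reported above:
    # h = 0.498, c = 1.3052, gamma = 0.110, R = 27.11
    print(fiksel_phi([0.2, 5.0, 30.0], h=0.498, c=1.3052, gamma=0.110, R=27.11))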
§.§.§ Residuals <cit.> defined residuals and residual plots for Gibbs models for spatial point processes, providing a strategy for model criticism in spatial point process models. Their techniques resemble the existing methods for linear models, i.e., they represent the differences between the data and the fitted model. The raw residual measure can be defined as ℛ_m^k(B)=N(X_m^k∩ B) - ∫_B λ̂^k((𝐮,m)|X^k) 𝐮, ∀ B ⊆ W, m∈ℳ, k≤ g. This function can be estimated in any subset of the observation window; that is the rationale behind the term “measure”. Usually, a regular window partition is set to estimate the measure in each pixel as per density estimations. In practice, the residuals are often scaled to calculate, for example, standardised residuals. The analogue to Pearson's residuals in this context is given by ℛ^⋆ k_m (B)=∑_𝐮_i∈ X_mλ̂^k ((𝐮_i,m)|X^k)^-1/2 - ∫_B λ̂^k((𝐮,m)|X^k)^1/2𝐮. There is a third version of the residual measure called inverse λ residuals ℛ^† k_m (B)=∑_𝐮_i∈ X_m1{λ̂((𝐮_i,m)|X^k)>0}/λ̂^k((𝐮_i,m)|X^k) - ∫_B 1{λ̂^k((𝐮,m)|X^k)>0}𝐮. For comparison purposes, we need to summarise some residual measure ℛ(B), ℛ_(P)(B) or ℛ_(I)(B), thus we consider the total value (the integral) of these measures over the observation window W. As we have five different types of cells, we may obtain a total value for each patient. Then we retrieve 122× 5 total residuals. Figure <ref> summarises these residuals for each proposed model. From Figure <ref>, we can glimpse several exciting things; we see how the three residuals provide roughly the same information, although on different scales. We also see how CK+ cancer cells seem the most difficult to model since their residuals are the furthest from zero in all models. The model that we have proposed and its variants, that is, those with multitype Fiksel interactions, generally present a similar and very adequate behaviour, except perhaps for the Fiksel 4 model, that is, the one where it is assumed that immune cells of different types do not interact with each other; this model has greater variability than its counterparts. This good behaviour of the models of the first set allows us to glimpse that this function is appropriate to model this type of cells, which is in harmony with our motivation (see Section <ref>) to use this type of interaction. The Poisson model is presented as the most inadequate since it is not only the one that is furthest from zero but also presents the greatest variability; this suggests that the interaction between cells must be a fundamental part of any model proposed for this type of tumour immune microenvironments. The models whose interaction functions are Strauss or Hardcore do not generate good models either. Although the residuals of these cases are closer to zero than in the Poisson case, their variability is greater than that of the other cases considered. Of the alternative interaction functions, the Strauss Hardcore is the one that best manages to model immune cells; this model is the most competitive that we can find among the multitype models that are currently known. We especially highlight the Fiksel 2 model since, although far from being the best, it is surprisingly good at modelling cells without having first-order information. If we wanted to simplify the model, we could do without the first-order information and still obtain a successful model. 
The adequacy of the simplified Fiksel 2 model has positive implications in practice; for example, we do not need prior knowledge of the patients' clinical conditions to obtain information on the distribution of cells in the tumour immune microenvironment. Although we strive to obtain reasonable estimates of the offset (the expected value of the counts per unit area at each point of the observation window), it does not appear critical for obtaining a good model; the Fiksel 3 model confirms this fact as well. Root mean square error. We wish to summarise our residuals in order to provide an overall notion of the performance of the models. To do so, we first consider an overall residual measure across the cell types given by ℛ^k_∙:=∑_m∈ℳℛ^k_m. We then define the root mean square error in this context as RMSE = √(1/g∑_k=1^g(∫_Wℛ^k_∙)^2). Notice that this definition extends straightforwardly to Pearson's and inverse residuals. We compute these residuals for every one of the considered alternative models. Table <ref> shows that the different types of residuals do not agree on a single best model. However, we can highlight our base model (Fiksel 1) as the best overall since it maintains low RMSE values relative to the other models. The model that assumes no interaction between cells of different types performs better in terms of raw residuals; however, the Pearson and inverse residuals do not support this finding as strongly, which may be due to the variability seen in Figure <ref>. The Strauss Hardcore model is highly competitive; the Pearson and inverse residuals favour this model despite its slightly higher RMSE based on raw residuals compared with the Fiksel 1 model. § DISCUSSION In this article, we have proposed a multitype Fiksel interaction model for tumour immune microenvironments and applied it to understand inhomogeneity and interaction patterns in a sample of tissues digitalised through digital pathology techniques from 122 patients with lung cancer. Throughout this article, we have explored various tools connected through a statistical model with several components: a first-order component, also called a trend, which in turn includes a non-parametric kernel estimate of the expected number of cells in each tumour immune microenvironment and several design covariates. We have included all possible interactions between cells of the same type and cells of different types in a single component that describes the interaction; this term is a Fiksel-type pairwise interaction function, and it comes from the Gibbs and Markov pairwise interaction processes <cit.>. In summary, we have shown that inhomogeneous multitype Gibbs processes provide effective tools for analysing tumour immune microenvironments. This is the first time this multitype version has been used in practice, since only the bivariate version was initially proposed <cit.>. Given that the images processed in digital pathology are relatively new, little has been studied from a statistical and probabilistic perspective about the distribution of immune cells within the tumour immune microenvironment <cit.>. Consequently, some open questions could give rise to new and interesting research directions. For example, are there asymmetric interactions between cell types? In other words, does one type of cell appear or become located within the tumour immune microenvironment first, with the other types then distributed conditionally on the first type?
Hierarchical interaction models might account for this type of cell behaviour and assign, for example, our multitype Fiksel interaction function in a conditional way. A conditional hierarchical model would express the probability density as f(X)=f_1(X_1)f_2|1(X_2|X_1)f_3|1,2(X_3|X_1,X_2)⋯ f_M|1,2,…,M-1(X_M|X_1,…,X_M-1), where X=∪_m∈ℳX_m has M point types, and X_m-1 takes precedence over X_m for every m∈ℳ. Each probability density f_m|1,2,…,m-1 is a pairwise interaction density that assembles all the information about the preceding terms, including the normalising constant, the trend and the interaction function <cit.>. On the other hand, the alternative models (see Section <ref>), particularly Fiksel 2 and 3, have shown that the assumption of homogeneity could be reasonable in this context. Unfortunately, a homogeneity test based on quadrat counting <cit.> would be inadequate given the dependency between cells; therefore, we cannot formally confirm or rule out homogeneity. Nevertheless, although the first-order factors are statistically significant (see Section <ref>), the performance of these models remains similar to that of the models including first-order terms. Therefore, a homogeneous, more parsimonious model, such as the ones we have offered, may be adequate in this case. It is important to highlight that our estimates are point estimates and could be improved, for example, through simulation using Metropolis-Hastings algorithms for Gibbs processes <cit.>. These algorithms can provide confidence intervals for the associated parameters. As an interesting future research direction, the proposed model also offers a good opportunity to estimate parameters using other approximate Bayesian computation methods, such as the variational Bayesian method <cit.>. One of the problems we face is the amount of data involved when fitting the generalised linear model computationally. Although we have very well-optimised software nowadays, sometimes it is not enough. In our case, the regression algorithm must process about seven million records, in addition to the other point process techniques described throughout the paper, such as kernel smoothing and K-function calculation, which fortunately were computed only per patient. The problem we have faced goes beyond processing speed; it is the vast amount of memory required to carry out all the calculations, which demands serious computational resources. An interesting line of research would therefore be how to bring these computations, in the context of this type of digital pathology image, within reach of a conventional laptop. We conclude that multitype inhomogeneous Gibbs models are a convenient statistical option for tumour immune microenvironment analysis. In particular, the Fiksel interaction function is satisfactory for studying the interaction between cells of the tumour immune microenvironment. These models can easily include extra clinical information available per individual, although they are robust enough to provide good results even without this first-order information. These models also allow estimation and inference through the computational simplification offered by pseudolikelihood methods.
http://arxiv.org/abs/2307.03870v1
20230708005332
Opacity of Parametric Discrete Event Systems: Models, Decidability, and Algorithms
[ "Weilin Deng", "Daowen Qiu", "Jingkai Yang" ]
cs.FL
[ "cs.FL", "cs.SY", "eess.SY" ]
Opacity of Parametric Discrete Event Systems: Models, Decidability, and Algorithms Weilin Deng, Daowen Qiu^⋆, and Jingkai Yang Weilin Deng is with the School of Internet Finance and Information Engineering, Guangdong University of Finance, Guangzhou, 510521, China (e-mail: [email protected]). Daowen Qiu (Corresponding author) is with the Institute of Quantum Computing and Computer Theory, School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou, 510006, China (e-mail: [email protected]). Jingkai Yang is with the School of Mathematics and Statistics, Yulin Normal University, Yulin, 537000, China (e-mail: [email protected]). =================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Finite automata (FAs) model is a popular tool to characterize discrete event systems (DESs) due to its succinctness. However, for some complex systems, it is difficult to describe the necessary details by means of FAs model. In this paper, we consider a kind of extended finite automata (EFAs) in which each transition carries a predicate over state and event parameters. We also consider a type of simplified EFAs, called Event-Parameters EFAs (EP-EFAs), where the state parameters are removed. Based upon these two parametric models, we investigate the problem of opacity analysis for parametric DESs. First of all, it is shown that EFAs model is more expressive than EP-EFAs model. Secondly, it is proved that the opacity properties for EFAs are undecidable in general. Moreover, the decidable opacity properties for EP-EFAs are investigated. We present the verification algorithms for current-state opacity, initial-state opacity and infinite-step opacity, and then discuss the complexity. This paper establishes a preliminary theory for the opacity of parametric DESs, which lays a foundation for the opacity analysis of complex systems. Opacity, discrete-event systems, parametric finite automata, extended finite automata § INTRODUCTION Over the last ten years, the problem of opacity analysis for discrete event systems (DESs) received considerable attention. Opacity is an important security property, which was initially introduced in computer science to analyze cryptographic protocols. Roughly speaking, a DES is said to be opaque, if the intruder cannot determine the occurrence of the secret behavior by his observations to the system. Finite automata (FAs) model is a popular tool to describe DESs in logical level due to its succinctness <cit.>. The notions of language-based opacity <cit.>, <cit.>, current-state opacity <cit.>, initial-state opacity <cit.>, infinite-step opacity <cit.> and pre-opacity <cit.> for FAs model were well investigated in recent years. In addition, the opacity enforcement based on the techniques of supervisory control and output obfuscation were proposed (e.g., see <cit.>-<cit.> and references therein). 
For some complex systems, it is difficult to describe the necessary details and analyze their opacity properties by means of FAs model, and thus some extended models are necessary. Actually, the opacity properties for various extended models were investigated recently, such as time systems <cit.>, networked systems <cit.>, Petri nets <cit.>-<cit.>, cyber-physical systems <cit.>, probabilistic systems <cit.> fuzzy systems <cit.>, and the other systems <cit.>-<cit.>. In the field of system modeling, control flow refers to the possible sequences of the interactions between a system and its environment, and data flow refers to the constraints on data parameters in the interactions <cit.>. FAs model well describes control flow, but fails to capture data flow and the mutual influence between control flow and data flow efficiently. A typical example is modeling network protocols, where the models must characterize how different parameter values in sequence numbers, user IDs, socket IDs, etc., affect the control flow. Another easy-to-understand example is modeling the process of web-site registering that usually requires a user to provide her/his identical password twice (see Examples <ref>-<ref> and Remark <ref> in Section II for details). Obviously, it is difficult and inefficient for FAs model to do such things. To address this problem, in this paper, we also consider a kind of extended finite automata (EFAs), in which the states and events are both augmented with parameters, and each transition carries a predicate and an update function over these parameters. The EFAs model is a powerful but complicated tool. It is hard to analyze some properties of EFAs, and we prove that the opacity properties of EFAs are undecidable. Thus, we also consider a simplified EFAs model, called Event-Parameters EFAs (EP-EFAs), where the state parameters are removed. By means of the transitions carrying predicates over parameters, the models of EFAs and EP-EFAs improve FAs model in efficiently representing and handling some complex systems where control flow, data flow and the interactions between them are required to be characterized. In general, EFAs and EP-EFAs can be viewed as a special type of infinite and finite state models, respectively, with infinite alphabet, which have been well investigated in computer science (e.g., see <cit.>-<cit.>). For the general infinite state automata (ISA), only a few properties are decidable <cit.>, <cit.>, However, for some types of ISA, there exist quite a few decidable properties, e.g., the properties of reachability, simulation and eventuality of Well-Structured ISA are all decidable <cit.>. On the other hand, for the finite state models with infinite alphabet, there are many decidable properties, as well as undecidable properties <cit.>. For example, the emptiness and language inclusion of 1N-RAs are decidable; however, its universality and equivalence are undecidable <cit.>. In this paper, the aforementioned EFAs and EP-EFAs are referred to as parametric DESs collectively. We would like to establish a preliminary theory for the opacity of parametric DESs, which lays a foundation to analyze the opacity of some complex systems. To the best of our knowledge, this is the first study on the opacity analysis of parametric DESs. The main contributions of this paper are as follows. * Two parametric models, i.e., EFAs and EP-EFAs, are introduced for DESs, and then it is proved that the latter can be simulated by the former but the reverse does not hold. 
This means that EFAs model is more expressive than EP-EFAs model. We also illustrate that these two parametric models are both more expressive and efficient than FAs model. * We formulate the current-state opacity, initial-state opacity and infinite-step opacity for parametric DESs, and then prove that these opacity properties for EFAs are all undecidable in general. The basic idea of the proof is reducing the halting problem of Two-Counter Machines (2CMs) to the verification of the opacity properties. * We investigate the decidable opacity properties for EP-EFAs. Based on the symbolic observer, the verification algorithms for current-state opacity, initial-state opacity and infinite-step opacity are provided, and the complexity is analyzed. The rest of this paper is organized as follows. The system models for parametric DESs are introduced and investigated in Section II. The problem formulation and necessary assumptions are provided in Section III. In Section IV, the opacity properties of EFAs are proved to be undecidable, and in Section V, the decidable opacity properties of EP-EFAs are studied. Finally, Section VI concludes this paper. § PARAMETRIC MODELS In this section, we present some notations, and introduce two parametric models: extended finite automata (EFAs) and Event-Parameters EFAs (EP-EFAs), and then discuss their expressiveness and efficiency. Let ℕ be the set of natural numbers, and [m:n] be the set of integers {m,m+1,…,n}. Let Σ be an alphabet, Σ^* be the set of finite strings over Σ including empty string ϵ, Σ^k be the set of strings of length k, and Σ^≤ k be the set of strings of length i, i ∈ [0:k], over Σ. A language L over Σ is a subset of Σ^*. We denote by |Ω| the number of elements in the set of Ω, and by |s| the length of the string s ∈ L with a slight abuse of notation. A discrete event system (DES) is usually modeled as a finite automaton H=(Q, q_0,Σ, δ) <cit.>, where Q is the finite set of states, Q_0⊆ Q is the set of initial states, Σ is the finite set of events, and δ:Q ×Σ→ Q is the deterministic (partial) transition function. The transition function δ can be extended to domains Q ×Σ^* and 2^Q × 2^Σ^* by the usual manner. The generated language by H is L(H) = {s | ∃ q_0∈ Q_0, q ∈ Q, s.t. q = δ(q_0,s) }. A Boolean algebra is a tuple 𝒜=(𝒰, Ψ, ∙), where 𝒰 is the universe of discourse, and Ψ is the set of predicates closed under the Boolean connectives, substitution, equality and if-then-else terms <cit.>. The element φ∈Ψ is called an 𝒰-predicate in 𝒜, or just predicate when 𝒰 and 𝒜 are clear from the context. The denotation function ∙: Ψ→ 2^𝒰 maps a predicate to the valuations of variables that make the predicate true. Hence, for any φ, ψ∈Ψ, φ∧ψ = φ∩ψ, φ∨ψ = φ∪ψ, φ = 𝒰\φ <cit.>. For the true predicate ⊤ and false predicate , we have ⊤ = 𝒰 and = ∅. For any φ∈Ψ, φ is said to be satisfiable, denoted by isSat(φ), if φ≠∅. This paper solely focuses on Boolean algebras in which the predicate satisfiability is decidable. Throughout this paper, we denote by X and Y the (infinite or finite) domains of event and state parameters, respectively, and denote by x and y (with superscript and subscript usually) the event and state parameters, respectively. In addition, we use a, b (with superscript and subscript usually) to denote the specific values of event and state parameters, respectively. The model of extended finite automata (EFAs) is defined as follows. 
An extended finite automaton (EFA) is defined as E = (Q, Σ, X, Q_0, Q_m, Y, Y_0, R ) where Q is the finite set of state tags, Σ is the finite set of event tags, X is the domain of one event parameter, Q_0⊆ Q and Q_m⊆ Q are the sets of tags of initial and marked states, respectively, Y is the domain of the state parameter and Y_0⊆ Y is the domain of parameter for initial states, and R is the set of symbolic transitions and each symbolic transition r ∈ R is of form q q where * q ∈ Q and q∈ Q are the tags of source and target states, respectively, which carry state parameters y_q∈ Y and y_q∈ Y, respectively; * k ≥ 0, the step-length of the transition, is the size of the tuple of event parameters in this transition; * σ∈Σ is the tag of the event, and if k ≥ 1, it carries a k-tuple of event parameters ⟨ x_σ^1, x_σ^2, …, x_σ^k⟩, x_σ^i∈ X, i ∈ [1:k], otherwise, it carries no event parameter; * φ is the guard of transition r, and it is a (Y × X^k)-predicate if k ≥ 1 otherwise a Y-predicate, and if event σ occurs at state q with the proper values of parameters to enable φ, then the transition r may be fired; * ξ, a Y × X^k→ Y function if k ≥ 1 otherwise a Y → Y function, is responsible for updating the parameter of target state according to the given parameters of source state and event when the transition r is fired. We denote by Ξ the special updating function that does nothing. If there are multiple transitions that can be fired at a state, then only one of them is fired nondeterministically. E is said to be deterministic, if no more than one transition can be fired synchronously at each state, i.e., for two different transitions q q_1 and q q_2, φ_1(b, ⟨ a_1, a_2, …, a_k⟩) ∧φ_2(b, ⟨ a_1, a_2, …, a_k⟩) dose not hold for any state parameter value b and event parameters values ⟨ a_1, a_2, …, a_k⟩. Moreover, the implicit ϵ-selfloop q q can be viewed as the special 0-step-length transition q q. The step-length of E is defined as the maximum of the step-lengths of the symbolic transitions in E. Actually, the symbolic transition q q, k ≥ 1, defines the set of concrete transitions {(q,b) (q,b) | (b, ⟨ a_1, a_2, …, a_k⟩) ∈φ ∧b = ξ(b,⟨ a_1, a_2, …, a_k⟩) }, where (q,b) and (q,b) are the source and target states, respectively, and σ⟨ a_1, a_2, …, a_k⟩ is the parameterized event of the concrete transition. For example, suppose X=Y={0, 1,2}, the symbolic transition q q denotes the set of concrete transitions {(q,0) (q,0), (q,1) (q,2)}. The symbolic transitions allow EFAs model to efficiently characterize the control flow (i.e., the possible sequences of fired transitions), data flow (i.e., the constraints on event parameters) and their interactions in a system. A parameterized string is a sequence of parameterized events, and a parameterized language is a set of parameterized strings. For a parameterized string u = v_1v_2… v_n, where v_i=σ_i⟨ a_i^1, a_i^2, …, a_i^k_i⟩, k_i≥ 0 [ if k_i= 0, then v_i=σ_i and 𝔇(u) = 𝔇(v_1… v_i-1v_i+1… v_n).], the data string of u, denoted by 𝔇(u), is obtained by stripping all the event tags σ_i, i.e., 𝔇(u) = ⟨ a_1^1, a_1^2, …, a_1^k_1⟩ ⟨ a_2^1, a_2^2, …, a_2^k_2⟩ … ⟨ a_n^1, a_n^2, …, a_n^k_n⟩ is a sequence of event parameter tuples. Intuitively, a data string is a sequence of data exchanges between the system and its environment that meet the data constraints. The flat data string is obtained by flattening the parameters of a data string in order, i.e., 𝔣𝔇(u) = a_1^1 a_1^2… a_1^k_1 a_2^1 a_2^2… a_2^k_2… a_n^1 a_n^2… a_n^k_n. Given an EFA E = (Q, Σ, X, Q_0, Q_m, Y, Y_0, R ). 
If there exist a series of concrete transitions (q_i,b_i) (q_i+1, b_i+1), where these v_i are parameterized events, i ∈ [1:n], n ≥ 1, we define the combined concrete transition as the path (q_1,b_1) (q_n+1, b_n+1) where u = v_1v_2… v_n. The language between a set of source states Q_1⊆ Q and a set of target states Q_2⊆ Q of the EFA E = (Q, Σ, X, Q_0, Q_m, Y, Y_0, R ) is defined as follows. L_Q_1^Q_2(E)= { u | ∃ q_1∈ Q_1, q_2∈ Q_2, b_1, b_2∈ Y, s.t. (q_1,b_1) (q_2, b_2) ∧ ( q_1∈ Q_0⇒ b_1∈ Y_0) }. The generated language and marked language by the EFA E are, respectively, defined as L(E) = L_Q_0^Q(E) and L_m(E) = L_Q_0^Q_m(E). The data language and marked data language of the EFA E are, respectively, defined as L_d(E) = ⋃_u ∈ L(E)𝔇(u) and L_md(E) = ⋃_u ∈ L_m(E)𝔇(u). The flat data language and flat marked data language of the EFA E are, respectively, defined as L_fd(E) = ⋃_u ∈ L(E)𝔣𝔇(u) and L_fmd(E) = ⋃_u ∈ L_m(E)𝔣𝔇(u). The EFA E shown in Fig. <ref> simulates the process of user registering in a web-site, where the user is required to provide his password twice for confirming its correctness. Suppose the charset for nickname and password are both Ω, then the domains of state and event parameters are Y=X=Ω^*. In the symbolic transition from q_0 to q_1, the user inputs his nickname, and the guard ⊤ does not block any input and the updating function Ξ does nothing. In the symbolic transition from q_1 to q_2, the user inputs his password for the first time (denoted by x_σ_2^1), and the updating function ξ is defined as y_q_2← x_σ_2^1 that means the password is stored to the target state q_2 as its parameter y_q_2. In the symbolic transitions from q_2 to q_3 and from q_2 to q_0, the user provides his password for the second time (denoted by x_σ_3^1). If these two passwords are identical (i.e., y_q_2=x_σ_3^1), then the former transition is fired, and the process goes to the final state q_3 and terminates successfully, otherwise (i.e., y_q_2≠ x_σ_3^1) the latter transition is fired, and the process fails and goes back to the initial state q_0. Note that the EFA E shown in Fig. <ref> is of 1-step-length. A more concise 3-step-length EFA with only two states and two symbolic transitions can also describe the same process. Before introducing this, we present a simplified EFAs model that has no state parameter. An Event-Parameters EFA (EP-EFA) is defined as S = (Q, Σ, X, Q_0, Q_m, T ), where Q is the finite set of states, Σ is the finite set of event tags, X is the domain of one event parameter, Q_0⊆ Q and Q_m⊆ Q are the sets of initial and marked states, respectively, and T is the set of symbolic transitions and each symbolic transition t ∈ T is of form q q where * q ∈ Q and q∈ Q are the source and target states, respectively; * k ≥ 0, the step-length of the transition, is the size of the tuple of event parameters in this transition; * σ∈Σ is the tag of the event, and if k ≥ 1, it carries a k-tuple of event parameters ⟨ x_σ^1, x_σ^2, …, x_σ^k⟩, x_σ^i∈ X, i ∈ [1:k], otherwise it carries no event parameter; * φ is the guard of transition t, and it is an X^k-predicate if k ≥ 1 otherwise the true predicate ⊤, and if event σ occurs at state q with the proper values of parameters to enable φ, then the transition t may be fired. If there are multiple transitions that can be fired at a state, then only one of them is fired nondeterministically. 
E is said to be deterministic, if no more than one transition can be fired synchronously at each state, i.e., for two different transitions q q_1 and q q_2, φ_1(⟨ a_1, a_2, …, a_k⟩) ∧φ_2(⟨ a_1, a_2, …, a_k⟩) does not hold for any event parameters values ⟨ a_1, a_2, …, a_k⟩[If k = 0, then φ_1 = φ_2 = ⊤ by the definition of symbolic transition. In this case, the determinism requires that there do not exist such two different transitions q q_1 and q q_2, which is actually the condition for deterministic FAs.]. Moreover, the implicit ϵ-selfloop q q can be viewed as the special 0-step-length transition q q. The step-length of S is defined as the maximum of the step-lengths of the symbolic transitions in S. The symbolic transition q q represents the set of concrete transitions {q q | ⟨ a_1, a_2, …, a_k⟩∈φ}, where σ⟨ a_1, a_2, …, a_k⟩ is the parameterized event of the concrete transition. For example, suppose X={0, 1, 2}, then the symbolic transition q q denotes the set of concrete transitions {q q, q q, q q, q q}. According to Definitions <ref> and <ref>, EP-EFAs model is just a special type of EFAs model without state parameters. This makes it impossible to keep information in the states, and thus limits the expressiveness of EP-EFAs model inevitably. The definitions of the parameterized string, data string, flat data string in EP-EFAs model are the same as themselves in EFAs model. Given an EP-EFA S = (Q, Σ, X, Q_0, Q_m, T ). If there exist a series of concrete transitions q_i q_i+1 where these v_i are parameterized events, i ∈ [1:n], n ≥ 1, we define the combined concrete transition as the path q_1 q_n+1 where u = v_1v_2… v_n. The language between the set of source states Q_1⊆ Q and the set of target states Q_2⊆ Q of the EP-EFA S is defined as follows. L_Q_1^Q_2(S)= { u | ∃ q_1, q_2∈ Q_1 s. t. q_1 q_2}. The generated language and marked language by the EP-EFA S are, respectively, defined as L(S) = L_Q_0^Q(S) and L_m(S) = L_Q_0^Q_m(S). The data language and marked data language of the EP-EFA S are, respectively, defined as L_d(S) = ⋃_u ∈ L(S)𝔇(u) and L_md(S) = ⋃_u ∈ L_m(S)𝔇(u). The flat data language and flat marked data language of the EP-EFA S are, respectively, defined as L_fd(S) = ⋃_u ∈ L(S)𝔣𝔇(u) and L_fmd(S) = ⋃_u ∈ L_m(S)𝔣𝔇(u). An EP-EFA S and an EFA E are said to be data-equivalent, if L_fmd(S) =L_fmd(E). The EP-EFA S shown in Fig. <ref> also simulates the process of user registering in a web-site. In this EP-EFA S, the event σ carries a 3-tuple of event parameters ⟨ x_σ^1, x_σ^2, x_σ^3⟩, where the first element is for user's nickname, and the second and third elements are both for user's password. Hence, if x_σ^2 = x_σ^3, then the process goes to the final state q_1 and terminates successfully, otherwise it fails and stays in state q_0. It is easy to verify that the EP-EFA S is data-equivalent to the EFA E shown in Fig. <ref>, as the parameters consumed in the transitions q_0→ q_1→ q_2→ q_0 and q_0→ q_1→ q_2→ q_3 in E are exactly the same as that consumed in the transitions q_0→ q_0 and q_0→ q_1 in S, respectively. Examples (<ref>-<ref>) show that, although the state parameter is removed, EP-EFAs model still retains a fair expressiveness by reading multiple event parameters as needed in each transition. By the definitions, the models of EFAs and EP-EFAs allow for infinite state/event spaces, while FAs model only supports finite ones. This means the parametric models are more powerful than FAs model. In Examples (<ref>-<ref>), suppose |Ω| = M and X=Ω^≤ N, then |X| = ∑_i=1^N M^i. 
To simulate the process of user registering in this finite space, FAs model needs at least (|X|+3) states and |X|*(|X|+2) transitions, as shown in Fig. <ref>. This suggests that even in a finite space, FAs model may be quite inefficient for certain complex systems when compared with the parametric models. There exists an EFA E that cannot be data-equivalent with any EP-EFA S_E. First of all, we construct an EFA E with X=Y=ℕ, as shown in Fig. <ref>. Obviously, E accepts even number of increasing natural numbers, i.e., the marked data string of E has the form of a_1a_2… a_2*n, where n ≥ 1, a_i+1 > a_i, i ∈ [1:(2*n-1)]. Secondly, we prove there does not exist a data-equivalent EP-EFA S_E for the EFA E by contradiction. Suppose there exists a data-equivalent EP-EFA S_E, where the number of the states is m and the step-length is K. Take a flat marked data string of S_E u=a_1a_2… a_2*n where 2*n > (m-1)*K. Suppose that u visits the sequence of states q_0→ q_1→…→ q_l in S_E, where q_0∈ Q_0 and q_l∈ Q_m. Since the step-length of S_E is K, we have l*K ≥ 2*n, and thus l>m-1. This means that there exist two states q_i, q_j in the sequence of visited states of u such that q_i = q_j and 0 ≤ i < j ≤ l, as the EP-EFA S_E has m states. Suppose that the parameters consumed from state q_i to state q_j are a_ia_i+1… a_j, 1 ≤i < j≤ 2*n. Obviously, the flat data string u = a_1… a_i-1 a_ia_i+1… a_j a_ia_i+1… a_j a_j+1… a_2*n also can be marked by S_E. Since S_E and E are data-equivalent, u is marked by E. However, it is not true, as a_j > a_i and û is not a sequence of increasing numbers. Hence, the contradiction is generated, which implies there does not exist a data-equivalent EP-EFA S_E for the EFA E shown in Fig. <ref>. For any EP-EFA S, there always exists a data-equivalent EFA E_S. It is straightforward by Definitions <ref> and <ref>. Propositions <ref> and <ref> imply that EFAs model is more expressive than EP-EFAs model. The models of EFAs and EP-EFAs extend FAs to an infinite model by means of the symbolic transitions carrying predicates over the infinite parameter space. With the help of the satisfiability modulo theories (SMT) solvers (e.g., Z3, Open SMT, MathSAT5, etc., see <cit.> for details), the data types that can be efficiently processed by parametric models include real/integer, bit vectors, arrays, difference logic, inductive data, etc. Therefore, the models of EFAs and EP-EFAs are quite expressive tools for DESs. A longer step-length adds the expressiveness of EP-EFAs. As evidence, the k-step-length transition q_1 q_2 has no equivalent series of transitions with a lower step-length. However, for the EFAs model, a longer step-length does not add its expressiveness, as the state parameter can be used to store the necessary information during the transitions. The subsequent proposition presents a formal demonstration for this fact. For any m-step-length EFA E_m, m > 1, there always exists a data-equivalent 1-step-length EFA E_1. Given any m-step-length EFA E_m, we construct the data-equivalent 1-step-length EFA E_1 as follows. For each symbolic transition (q,y_q ) (q, y_q) of E_m, 1 < k ≤ m, we add (k-1) new states: q^i, i∈ [1:k-1], and k events σ^j, j ∈ [1:k], and then construct a chain of k 1-step-length transitions q^j-1 q^j where q^0 = q and q^k = q to replace the transition q q. 
Specifically, the update functions are defined as follows: ξ^1def= [ y_q^1(1) ← x_σ^1^1 ] and for j ∈ [2:k-1], ξ^jdef= [ y_q^j(1) ← y_q^j-1(1); …; y_q^j(j-1) ← y_q^j-1(j-1); y_q^j(j) ← x_σ^j^1 ] where y_q^j(i) means the i^th element of the state parameter of q^j, and ξ^kdef=ξ(x_σ^1/y_q(1), …, x_σ^k-1/y_q(k-1), x_σ^k/x_σ^k^1), where the “A/B" denotes the substituting A by B in function ξ. The predicates are as follows: φ^i = ⊤ for i∈ [1:k-1], and φ^k = φ(x_σ^1/y_q(1), …, x_σ^k-1/y_q(k-1), x_σ^k/x_σ^k^1) where the “A/B" denotes the substituting A by B in the predicate. Obviously, φ^k is a (Y × X)-predicate where Y = X^k-1. The intuitive meaning of these new transitions is as follows. Each new transition is responsible for transmitting state parameters from source state to target state and storing one event parameter to target state parameter; and the first (k-1) transitions are guarded with ⊤ and the last one is guarded with φ^k that is equivalent with φ. In addition, ξ^k is also equivalent with ξ. This means that for any k event parameters, the transition (q,y_q ) (q, y_q) is fired if and only if the chain of transitions is fired, and meanwhile the parameter of the final state q is also updated in the same way. Thus, by replacing each transition of E_m with such a chain of transitions, we can obtain the data-equivalent 1-step-length E_1. § PROBLEM FORMULATION AND ASSUMPTIONS In this section, we present some assumptions and then formulate the problems discussed in this paper. In rest of this paper, we focus on the problem of opacity analysis for a parametric DES modeled by an EFA E = (Q, Σ, X, Q_0, Q_m, Y, Y_0, R ) or an EP-EFA S = (Q, Σ, X, Q_0, Q_m, T ). In the following, the parametric DES is denoted by G, and the notation L_Q_1^Q_2(G) is the language calculated by Equation (<ref>) when G is an EFA, and by Equation (<ref>) when G is an EP-EFA. The basic assumptions in this paper are as follows. * Assumption 1: The secret and non-secret behavior of the parametric system can be coded into its state space. We consider the following two cases: 1) the secret and non-secret behavior are the sets of data strings arriving in the given secret states Q_s⊆ Q and non-secret states Q_ns⊆ Q, respectively, and Definitions <ref> and <ref> are of this case; 2) the secret and non-secret behavior are the sets of data strings originating from the given secret initial states Q_s⊆ Q_0 and non-secret initial states Q_ns⊆ Q_0, respectively, and Definition <ref> is of this case. * Assumption 2: The intruder knows the complete structure of the parametric DES G, and he can observe the data exchanges between the system and its environment during the interactions (i.e., data language L_d(G)) through a static observation function θ. The observation function θ is defined as: for any data string d = ⟨ a_1^1a_1^2… a_1^k_1⟩ ⟨ a_2^1a_2^2… a_2^k_2⟩…⟨ a_j^1a_j^2… a_j^k_j⟩∈ L_d(G), θ(d) = ⟨θ(a_1^1)θ(a_1^2) …θ(a_1^k_1) ⟩⟨θ(a_2^1)θ(a_2^2) …θ(a_2^k_2) ⟩ …⟨θ(a_j^1)θ(a_j^2) …θ(a_j^k_j) ⟩ where θ(a_m^n) = a_m^n, if ϑ(a_m^n) holds, ϵ, otherwise, and ϑ is the X-predicate describing the observable condition for data elements, and the empty observation “⟨ϵ⟩" in θ(d) can be removed directly. The set of observations for G is defined as Θ(G) = ⋃_u ∈ L(G)θ(𝔇(u)). An observable unit of the observation w, w ∈Θ(G), is the substring of form “⟨ a_ia_i+1… a_i+k⟩", k ≥ 0, in w. Let |w|_u denote the number of observable units in w. 
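As a small illustration, the following sketch (in Python; the observability predicate is the one used in the worked examples of Section V, ϑ(x) = [x ≥ 5], and the representation of data strings as lists of parameter tuples is our own convention) computes the observation θ(d) of a data string by masking unobservable parameters and dropping observation units that become empty.

def theta(data_string, observable):
    # Project a data string (a list of event-parameter tuples) onto the
    # intruder's observation: keep only the parameters satisfying the
    # observability predicate, and drop units that become empty.
    observation = []
    for unit in data_string:
        visible = tuple(a for a in unit if observable(a))
        if visible:                        # empty units "<eps>" are removed
            observation.append(visible)
    return observation

observable = lambda x: x >= 5
print(theta([(3, 7), (1, 2), (6,)], observable))   # -> [(7,), (6,)]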
According to the definition, the observations such as “⟨ a_1a_2⟩⟨ a_3⟩" and “⟨ a_1⟩⟨ a_2a_3⟩" are considered to be different. Two identical observations have the same number of observable units and the corresponding units are equal to each other. Therefore, an observable unit is regarded as a minimal information structure acquired by the intruder, and this paper considers the data language rather than the flat data language in opacity analysis. The main reasons for this treatment are as follows. 1) Since each parameter tuple is transmitted between the system and its environment as a whole and the observable unit is the observable part of parameter tuple, the intruder will obtain each observable unit as a whole. 2) Similar to the literature of opacity analysis <cit.>-<cit.>, this paper also assumes that intruders have sufficient memory and computation capabilities to keep the history of the observations and update the state estimation for the system instantaneously by their latest observations. Based on these assumptions, we present three opacity properties for parametric DESs in the following. (current-state opacity) Given the parametric DES G with the set of secret states Q_s⊆ Q, the set of non-secret states Q_ns⊆ Q, and the observation function θ. G is said to be current-state opaque w.r.t. Q_s, Q_ns and θ, if (∀ u ∈ L^Q_s_Q_0(G)) (∃ v ∈ L^Q_ns_Q_0(G)) θ(𝔇(u))=θ(𝔇(v)). (initial-state opacity) Given the parametric DES G with the set of secret initial states Q_s⊆ Q_0, the set of non-secret initial states Q_ns⊆ Q_0, and the observation function θ. G is said to be initial-state opaque w.r.t. Q_s, Q_ns and θ, if (∀ u ∈ L^Q_Q_s(G)) (∃ v ∈ L^Q_Q_ns(G)) θ(𝔇(u))=θ(𝔇(v)). (infinite-step opacity) Given the parametric DES G with the set of secret states Q_s⊆ Q, the set of non-secret states Q_ns⊆ Q, and observation function θ. G is said to be infinite-step opaque w.r.t. Q_s, Q_ns and θ, if (∀ uu∈ L_Q_0^Q(G): u ∈ L^Q_s_Q_0(G)) (∃ vv∈ L_Q_0^Q(G) : v ∈ L^Q_ns_Q_0(G)) [θ(𝔇(u))=θ(𝔇(v)) ∧θ(𝔇(u))=θ(𝔇(v))]. The opacity properties of parametric DESs presented in Definitions <ref>, <ref>, <ref> have the same intuitive meanings as their counterparts of the classic DESs. We would investigate the opacity properties for EFAs and EP-EFAs in Sections IV and V, respectively. § UNDECIDABILITY OF OPACITY IN EFAS In this section, we prove that the opacity properties presented in Definitions <ref>, <ref>, <ref> for EFAs are all undecidable in general. The main idea of the proof is reducing the halting problem of two-counter machines to the verification of the opacity properties. A counter machine is an abstract machine used to model computation in formal logic and theoretical computer science. A counter machine consists of several registers, each of which only can store an integer number, and a set of arithmetic operations and control instructions. Minsky introduced a type of counter machines including two registers r_j, j ∈{1, 2}, and three instructions: INC(r_j), DEC(r_j) and JZ(r_j, z) with the semantics of r_j← r_j + 1, r_j← r_j - 1, and goto(z) if r_j=0, respectively <cit.>. This kind of machines is usually called Two-Counter Machines (2CMs) in the literature. 2CMs are Turing equivalent <cit.>. It is well known that the halting problem for Turing machines is undecidable. Therefore, by Lemma 1, we have the following result. The halting problem of 2CMs is undecidable. 
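To make the reduction target concrete, the following minimal sketch (in Python, with a hypothetical instruction encoding of our own) simulates a 2CM on configurations of the form (r_1, r_2, c); its one-step semantics is precisely what the predicate φ^step formalises below.

def step(program, config):
    # One execution step of a 2CM. A configuration is a triple (r1, r2, c),
    # with c the 1-based program counter; instructions are encoded as
    # ('INC', j), ('DEC', j) or ('JZ', j, z) for register j in {1, 2}.
    r1, r2, c = config
    op = program[c - 1]
    regs = [r1, r2]
    if op[0] == 'INC':
        regs[op[1] - 1] += 1
        c += 1
    elif op[0] == 'DEC':
        regs[op[1] - 1] -= 1
        c += 1
    elif op[0] == 'JZ':
        c = op[2] if regs[op[1] - 1] == 0 else c + 1
    return (regs[0], regs[1], c)

def run(program, config, max_steps=1000):
    # Run until the counter leaves the program (halt) or the step budget is
    # exhausted; whether the machine ever halts is, in general, undecidable.
    for _ in range(max_steps):
        if not (1 <= config[2] <= len(program)):
            return config                  # halted
        config = step(program, config)
    return None                            # no verdict within the budget

# Count r1 down to zero; with r2 kept at 0, instruction 3 acts as an unconditional jump.
prog = [('JZ', 1, 4), ('DEC', 1), ('JZ', 2, 1)]
print(run(prog, (3, 0, 1)))                # -> (0, 0, 4): the machine halts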
Obviously, a configuration of a 2CM with program P can be described as a triple (r_1,r_2, c ) ∈ℕ^3, where r_1 and r_2 keep the values of the first and second registers, respectively, and c keeps the value of program counter. Let x(j) denote the j^th entry of the configuration x∈ℕ^3, j∈ [1:3]. Let |P| denote the number of instructions in program P. Firstly, we formulate the (ℕ^3×ℕ^3)-predicate φ^step that characterizes the configuration evolution of the 2CM with program P after executing a single instruction, where the first and second elements refer to the current and subsequent configurations, respectively. Let φ_i be the (ℕ^3×ℕ^3)-predicate describing the relation of the configurations before and after the executing of the i^th instruction of program P. We formulate φ_i according to the type of the i^th instruction as follows. * If the i^th instruction is INC(r_j), j ∈{1,2}, then φ_i(y, x) def= [(x(j) = y(j) + 1) ∧ (x(3-j) = y(3-j)) ∧ (y(3) = i ) ∧ (x(3) = i + 1)], where the first clause means that the j^th register increases by 1, the second clause means the other register remains unchanged, the third and fourth clauses mean that the program is executing the i^th instruction and the next instruction to be executed is the (i+1)^th one, respectively. * If the i^th instruction is DEC(r_j), j ∈{1,2}, then φ_i(y, x) def= [(x(j) = y(j) - 1) ∧ (x(3-j) = y(3-j)) ∧ (y(3) = i ) ∧ (x(3) = i + 1)]. The intuitive meaning of this equation is similar to that of the previous one. * If the i^th instruction is JZ(r_j,z), j ∈{1,2}, then φ_i(y, x) def= [(x(1) = y(1) ) ∧ (x(2) = y(2)) ∧ (y(3) = i ) ∧ (x(3) = ( y(j) = 0 ? z: i+1 ))], where the first and second causes mean that both the registers remain unchanged, the third cause means that the program is executing the i^th instruction, and the last cause adopts a Java-language-style expression to describe the if-then-else term, i.e., if the register r_j equals 0, then the next instruction to be executed is the z^th one, otherwise the (i+1)^th one. Hence, we obtain the special predicate φ^step for program P as follows. φ^step(y, x) def=⋁_i ∈ [1:|P|]φ_i(y, x). The (ℕ^3×ℕ^3)-predicate φ^eq describing whether two configurations are equal to each other or not is defined as follows. φ^eq(y, x) def=⋀_i∈{1,2,3}[(x(i) = y(i) )] Obviously, φ^step and φ^eq are both predicates in the Boolean algebra 𝒜=(ℕ^3×ℕ^3, Ψ, ∙). For the specific program P, we denote by ℕ^3-predicates φ^ini and φ^fin its initial configuration and final configuration, respectively. Based on the above discussions, we prove that the current-state opacity of EFAs is undecidable by constructing a special parametric DES E_P w.r.t program P and reducing the halting problem of P to the verification of current-state opacity of E_P. The current-state opacity of EFAs is undecidable in general. Firstly, we construct the EFA E_P = { Q={ q_0,q_1, q_2,q_3}, Σ = {σ_1, σ_2, σ_3, σ_4}, X = ℕ^3, Q_0={q_0}, Y = ℕ^3, Y_0=φ^ini , R } w.r.t. a 2CM with program P (shown in Fig. <ref>). The predicates of φ^step and φ^eq are defined in Equations (<ref>) and (<ref>), respectively. The predicates of φ^ini and φ^fin, as the logic characterization for the initial and final configurations of program P, respectively, are X-predicates and also can be regarded as special (Y× X)-predicates where the first variable (i.e., state parameter) has no influence to the predicates. 
In the symbolic transitions, the update function ξ^sto just stores the event parameter to the target state as parameter, e.g., ξ^sto in the transition from q_0 to q_1 is defined as: y_q_1← x_σ_1^1. Let the set of secret states be Q_s = { q_0,q_1,q_2} and the set of non-secret states be Q_ns = { q_3}. Consider the observation function θ: ∀ u ∈ (ℕ^3)^*, θ(u) = ϵ. According to Definition <ref>, the parametric DES E_P is current-state opaque if and only if the non-secret behavior is non-empty, i.e., the state q_3 is reachable from the initial state q_0. According to Fig. <ref>, the data strings (i.e., the sequence of configurations) that can reach the state q_3 from the initial state q_0 have the form of v = a_1a_2… a_2*na_2*n+1, n ≥ 1, and q_3 is reachable if and only if v satisfies the following formulae: a_1∈φ^ini, a_2*n+1∈φ^fin, a_2*j+1∉φ^fin, j ∈ [1:n-1], and for i ∈ [1:n], (a_2*i-1,a_2*i) ∈φ^step and (a_2*i,a_2*i+1) ∈φ^eq. For such sequence v satisfying aforementioned formulae, there exists a one-to-one corresponding sequence w=a_1a_2a_4 … a_2*(n-1)a_2*n, n ≥ 1, where a_1 and a_2*n are, respectively, the initial and final configurations, and each pair of adjacent configurations satisfies the predicate φ^step. This means that w is exactly the evolution sequence of configurations during the execution of program P, i.e., the 2CM with program P halts if and only if there exists such sequence w. By Lemma <ref>, the halting problem of 2CMs is undecidable, which implies the undecidability of the existence of such w, and further implies the undecidability of the existence of such v. Hence, the reachability of state q_3 in E_P is undecidable, and so is the current-state opacity of E_P. Therefore, the current-state opacity of EFAs is undecidable in general. The initial-state opacity of EFAs is undecidable in general. First of all, we construct the EFA E_P = { Q={ q_0, …,q_4}, Σ = {σ_1, … ,σ_5}, X=ℕ^3, Q_0={q_0,q_4}, Y=ℕ^3, Y_0=φ^ini , R } for a 2CM with program P. In EFA E_P, the predicates φ^ini, φ^eq, φ^step and φ^fin, and update function ξ^sto have the same definitions as themselves in E_P (shown in Fig. <ref>). Let the set of secret initial states be Q_s = { q_4} and the set of non-secret initial states be Q_ns = { q_0}. Consider the observation function θ: θ(u) = u, u ∈ (ℕ^3)^*. Under these settings, we have the following fact. ⋃_v ∈ L_Q_s^Q(E_P)θ(𝔇(v)) = ⋃_v ∈ L_{q_4}^{q_4}(E_P)θ(𝔇(v)) = (ℕ^3)^*. That is, the set of observations for secret behavior is the universal set (ℕ^3)^*. According to Definition <ref>, E_P is initial-state opaque if and only if the set of the observations for non-secret behavior is also the universal set (ℕ^3)^*, i.e., ⋃_u ∈ L_Q_ns^Q(E_P)θ(𝔇(u)) = ⋃_u ∈ L_{q_0}^{q_0,q_1,q_2,q_3}(E_P)θ(𝔇(u)) = (ℕ^3)^*. In order to investigate the validness of Equation (<ref>), we construct a new EFA E_P from E_P by removing state q_4 and its corresponding transitions, and adding a state q_5 and two corresponding transitions (i.e., the transitions denoted by dotted-arrow in Fig. <ref>). In the new EFA E_P, we have the fact that the disjunction of the predicates in the transitions originating the same state is equal to the true predicate ⊤, e.g., for state q_2, (φ^fin∧φ^eq ) ∨ (φ^fin∧φ^eq) ∨ ( φ^eq) = ⊤. Hence, we have the fact that ⋃_u ∈ L_{q_0}^{q_0,q_1,q_2,q_3,q_5}(E_P)θ(𝔇(u)) = (ℕ^3)^*. According to Equation (<ref>), it is obvious that Equation (<ref>) holds if and only if the state q_5 is not reachable in E_P. 
Notice that the reachability of q_5 in E_P is identical to the reachability of q_3 in E_P (shown in Fig. <ref>), which has been proved to be undecidable in Theorem <ref>. Hence, the validness of Equation (<ref>) is undecidable, and so is the initial-state opacity of EFA E_P. Therefore, the initial-state opacity of EFAs is undecidable in general. The infinite-step opacity of EFAs is undecidable in general. We consider the same EFA E_P with the same secret states, non-secret states and the observation function as that in Theorem <ref>. By Definition <ref>, E_P is infinite-step opaque if and only if the state q_3 is reachable from the initial state q_0, which has been proved to be undecidable in Theorem <ref>. Therefore, infinite-step opacity of E_P is undecidable, and infinite-step opacity of EFAs is undecidable in general. As mentioned before, EFAs model is a quite powerful tool to simulate the interactions between a system and its environment. However, the coexistence of event and state parameters in the predicates complicates this model and make the properties of opacity undecidable. Hence, it is necessary to consider the EP-EFAs model where the state parameter is removed. § OPACITY OF EP-EFAS In this section, we investigate the current-state opacity, initial-state opacity and infinite-step opacity of EP-EFAs. We present the verification algorithms for these opacity properties firstly, and then analyze the complexity of these algorithms. §.§ Current-State Opacity of EP-EFAs In fact, Definition <ref> implies that the current-state opacity holds if and only if for any observation, the intruder cannot determine the system is in the secret states. For the convenience of demonstrating this issue, we present the following notion. Given the EP-EFA S= (Q, Σ, X, Q_0, T), the state estimation function Est^S: Θ(S) → 2^Q is defined as follows: for any observation w ∈Θ(S), Est^S(w) = { q ∈ Q | ∃ q_0∈ Q_0, u ∈ L(S), s.t. q_0 q ∧ w = θ(𝔇(u)) }. For classic DESs, the state estimations can be calculated by constructing a special automaton: observer <cit.>. Inspired by this idea, we present an algorithm (Algorithm <ref>) to construct the symbolic observer Obs(S) = { Q^obs, q^obs_0, T^obs} for the EP-EFA S= (Q, Σ, X, Q_0, T). The symbolic observer Obs(S) is a special EP-EFA without event tags. In the following, we would like to prove that the verification of current-state opacity for the EP-EFA S can be realized by means of its symbolic observer Obs(S). Firstly, we present three necessary Lemmas. Given an EP-EFA S = (Q, Σ, X, Q_0, T) with the set of secret states Q_s, the set of non-secret states Q_ns, and the observation function θ. S is current-state opaque w.r.t. Q_s, Q_ns and θ, if and only if for any observation w ∈Θ(S), Est^S(w) ∩ Q_s≠∅ ⇒ Est^S(w) ∩ Q_ns≠∅. (⇐) Given any u ∈ L_Q_0^Q_s(S). Let w = θ(𝔇(u)). This implies Est^S(w) ∩ Q_s≠∅. Thus, we have Est^S(w) ∩ Q_ns≠∅, which means that there exist q_0∈ Q_0, a non-secret state q∈ Q_ns and a parameterized string v, such that q_0q, and w = θ(𝔇(v)). This further implies that v ∈ L_Q_0^Q_ns(S) and θ(𝔇(u)) = θ(𝔇(v)). According to Definition <ref>, S is current-state opaque. (⇒) Given any observation w ∈Θ(S) satisfying Est^S(w) ∩ Q_s≠∅. Est^S(w) ∩ Q_s≠∅ means there exists a parameterized string u ∈ L_Q_0^Q_s(S) such that w = θ(𝔇(u)). Since S is current-state opaque, there exists v ∈ L_Q_0^Q_ns(S) such that θ(𝔇(v)) = θ(𝔇(u)) = w. This means that there exist q_0∈ Q_0 and q∈ Q_ns such that q_0q, which implies that q∈ Est^S(w). Thus Est^S(w) ∩ Q_ns≠∅. 
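The following minimal sketch illustrates the state-estimation function Est^S and the opacity condition of the lemma above. It is written in Python, restricted for brevity to 1-step-length transitions whose guards are ordinary callables, and it brute-forces the estimates over a small finite parameter domain and bounded string length instead of building the symbolic observer of Algorithm <ref>; all names below are our own.

def estimates(transitions, q0, domain, theta, max_len=3):
    # Approximate the map observation -> Est^S(w) for an EP-EFA given as a
    # list of 1-step-length transitions (q, sigma, guard, q_next), by exploring
    # all parameterized strings up to length max_len over a finite domain.
    # theta maps a parameter value to itself, or to None if unobservable.
    est = {(): set(q0)}
    frontier = [(q, ()) for q in q0]
    for _ in range(max_len):
        new_frontier = []
        for q, w in frontier:
            for (src, sigma, guard, dst) in transitions:
                if src != q:
                    continue
                for a in domain:
                    if not guard(a):
                        continue
                    obs = w + ((a,),) if theta(a) is not None else w
                    est.setdefault(obs, set()).add(dst)
                    new_frontier.append((dst, obs))
        frontier = new_frontier
    return est

def current_state_opaque(est, secret, non_secret):
    # Lemma: opaque iff every estimate meeting the secret states
    # also meets the non-secret states.
    return all(not (e & secret) or (e & non_secret) for e in est.values())

# Toy system: from q0, parameters >= 5 lead to the secret state, others to a non-secret one.
trans = [('q0', 'sig', lambda x: x >= 5, 'qs'), ('q0', 'sig', lambda x: x < 5, 'qn')]
est = estimates(trans, ['q0'], range(10), lambda x: x if x >= 5 else None)
print(current_state_opaque(est, {'qs'}, {'qn'}))   # -> False: observing a value >= 5 reveals qs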
Lemma <ref> implies that the verification of current-state opacity can be realized by going through all the possible state estimations. The following two Lemmas further prove that the states of the symbolic observer are exactly all the state estimations. Given an EP-EFA S = (Q, Σ, X, Q_0, T) and its symbolic observer Obs(S) = { Q^obs, q^obs_0, T^obs} constructed by Algorithm <ref>. For any observation w ∈Θ(S), Est^S(w) is the state reachable from q^obs_0 by w in Obs(S). Firstly, we claim that Obs(S) constructed by Algorithm <ref> is deterministic, i.e., given an observation, there exists only one reachable state in Q^obs. This is because Equation (<ref>) implies that if idx1 ≠ idx2, then ψ_idx1∧ψ_idx2 =, and thus no observation unit can simultaneously satisfy two different symbolic transitions originating from the same state q^obs of Obs(S). Secondly, we prove this Lemma by induction on the number of observation units in w. Let |w|_u = n. The base case is n = 0, i.e., w = ϵ. It is sufficient to show Est^S(ϵ) = q^obs_0. If q ∈ Q_0, obviously we have q ∈ Est^S(ϵ) and q ∈ q^obs_0. The remainder is to show Est^S(ϵ) \ Q_0 = q^obs_0\ Q_0. According to Equations (<ref>-<ref>), q q∈T means there exists a symbolic transition q q in S, such that φ holds for certain k unobservable event parameters or k=0. Therefore, by Equation (<ref>), a state q_n∈ q^obs_0\ Q_0, if and only if there exist a sequence of transitions q_0q_1…q_n in S, q_0∈ Q_0, where each predicate φ_i holds for k_i, i ∈ [1:n], unobservable event parameters or k_i = 0. This is equivalent to saying that there exists a parameterized string u, θ(𝔇(u)) = ϵ, and q_0 q_n by Definition <ref>, which also means that q_n∈ Est^S(ϵ) by Equation (<ref>). Thus the base case holds. The induction hypothesis is that for all observation w, |w|_u≤ n, Est^S(w) is reachable by w in Obs(S). We need to show that for any observation unit w = ⟨ a_1… a_k⟩, k ≥ 1, such that ww∈Θ(S), Est^S(ww) is reached by ww from q^obs_0 in Obs(S). This is equivalent to show that Est^S(ww) is reachable by w from state Est^S(w) due to the fact that the observer Obs(S) is deterministic. Since the observation function θ is static, we can reformulate Est^S(ww) as follows. Est^S(ww) = {q | q q∧ q ∈ Est^S(w) ∧w = θ( 𝔇(u)) }. Taking Est^S(w) as the q^obs in Equation (<ref>), then T^k_Est^S(w) is the set of observable transitions that originate from one of the states in Est^S(w) and contain k observable parameters. Suppose idx⊆ [1:|T^k_Est^S(w)|] is the only nonempty index set such that the observation unit w = ⟨ a_1… a_k⟩ satisfies ψ_idx (the existence follows from the fact that ww∈Θ(S) and the uniqueness follows from Equation (<ref>)). According to Equations (<ref>,<ref>), we obtain Est^S(ww) = q^obs. By Algorithm <ref>, we have Est^S(w) q^obs∈ T^obs, and thus Est^S(w) Est^S(ww) ∈ T^obs, which implies Est^S(ww) is reached from q^obs_0 by ww in Obs(S). This completes the proof of the induction step. Given an EP-EFA S = (Q, Σ,X, Q_0, T) and its symbolic observer Obs(S) = { Q^obs, q^obs_0, T^obs}. We have L(Obs(S)) = Θ(S). We prove this Lemma by induction on the number of observation units in w ∈ L(Obs(S)). Let |w|_u = n. The base case is n =0, i.e., w = ϵ. Obviously ϵ∈ L(Obs(S)) and ϵ∈Θ(S). Thus the base case holds. The induction hypothesis is that w ∈ L(Obs(S)) ⇔ w ∈Θ(S) holds for any observation w, |w|_u≤ n. Then we need to show for each observation unit w=⟨ a_1,…,a_k⟩, ww∈ L(Obs(S)) ⇔ ww∈Θ(S). Suppose q^obs is reached by w from q^obs_0 in Obs(S). 
By Lemma (<ref>) and Equation (<ref>), for each q_i∈ q^obs, there exists an initial state q_0^i∈ Q_0 such that q_0^i q_i, θ(𝔇(u_i)) = w. By Equations (<ref>-<ref>), ww∈ L(Obs(S)) holds if and only if there exists a nonempty index set idx such that ψ_idx holds for w. This is equivalent to saying that there exists at least an observable transition (t_i=q_iq_i) ∈T_q^obs^k, q_i∈ q^obs, w∈φ_i, i ∈idx, which further means there exists a parameterized event u_i such that q_iq_i and θ(𝔇(u_i)) = w by Equations (<ref>, <ref>). Therefore, ww∈ L(Obs(S)) holds if and only if there exists u_i such that q^i_0q_i, θ(𝔇(u_i)) = w, θ(𝔇(u_i)) = w, which means ww∈θ(𝔇(u_iu_i)) ∈Θ(S). This completes the proof of the induction step. Lemma <ref> implies that the state estimation for each observation is contained in the state space of the symbolic observer Obs(S). Lemma <ref> further implies that only the observations can reach the states of Obs(S). Hence, the state space of Obs(S) are exactly all the state estimations of S. Therefore, by Lemmas (<ref>, <ref>, <ref>), we have the following theorem. Given an EP-EFA S = (Q, Σ, X, Q_0, T) with the set of secret states Q_s, the set of non-secret states Q_ns, and the observation function θ. Let Obs(S) = { Q^obs, q^obs_0, T^obs} be the symbolic observer constructed by Algorithm <ref>. S is current-state opaque w.r.t. Q_s, Q_ns and θ, if and only if for any q^obs∈ Q^obs, q^obs∩ Q_s≠∅⇒ q^obs∩ Q_ns≠∅. The verification of current-state opacity and the construction of the symbolic observer (Algorithm <ref>) have the same complexity, as checking the validness for Equation (<ref>) can be finished during the construction of Obs(S). Suppose the EP-EFA S= (Q, Σ, X, Q_0, T) with K step-length has N states and M symbolic transitions. Assume that g(z) is the cost of checking satisfiability of the predicate with z free variables in the Boolean algebra. In Step 1) of Algorithm <ref>, we have |T| ≤ M*(K+1), and for each symbolic transition with l step-length, l+1 predicates are checked for satisfiability. Thus, the complexity of Step 1) is at most M*(K+1)*g(K). In Step 2), for each T^k_q^obs, there are 2^|T^k_q^obs|-1 combined predicates that need to be checked for satisfiability. Hence, there are at most ∑_q^obs∈ Q^obs∑_k=1^K(2^|T^k_q^obs|-1) predicates are checked for satisfiability. For a given q^obs, we have ∑_k=1^K|T^k_q^obs| ≤ |T|, and by this equation, we can prove ∑_k=1^K (2^|T^k_q^obs|-1) < 2^|T|≤ 2^M*(K+1). Since |Q^obs| ≤ 2^N, the complexity of Step 2) of Algorithm <ref> is at most g(K) * 2^N*2^M*(K+1). Therefore, the complexity of the verification of current-state opacity is g(K) * 2^N+M*K. As aforementioned, the EP-EFAs model can address many complex data and operations via the symbolic transitions. However, for the simplicity to demonstrate the obtained results, the following illustrative examples only consider integer arithmetic. Consider an EP-EFA S with X = ℕ shown in Fig. <ref>, where the set of secret states Q_s = {q_2} and the set of non-secret states Q_ns = Q\ Q_s. Suppose that the observation function θ is obtained by the X-predicate ϑ(x) def= [x ≥ 5 ]. Firstly, we construct the observable transitions as follows. T_t_1= { q_0 q_1}. T_t_2= { q_0 q_3; q_0 q_3}. T_t_3= { q_1 q_2 ; q_1 q_2}. T_t_4= { q_3 q_4 ; q_3 q_4}. T_t_5= { q_2 q_2 ; q_2 q_2}. T_t_6= { q_4 q_4 ; q_4 q_4}. Secondly, we have q^obs_0 = {q_0, q_1}, and obtain the corresponding set as follows. T^2_q^obs_0 = { q_0 q_3 ; q_1 q_2}. T^1_q^obs_0 = { q_1 q_2; q_0 q_3}. 
For T^2_q^obs_0, the set of satisfiable combined predicates are as follows. Ψ(T^2_q^obs_0) = {ψ_{1} = [x_1≥ 5 ∧ x_2≥ 5 ∧ x_2≠ x_1 +1]; ψ_{1,2} = [x_1≥ 5 ∧ x_2≥ 5 ∧ x_2 = x_1 +1] }. Through the transitions guarded with ψ_{1} and ψ_{1,2}, the states { q_3, q_4} and {q_2, q_3, q_4} are, respectively, generated and put into Q^obs. In addition, the corresponding symbolic transitions are put into T^obs. For T^1_q^obs_0, the set of satisfiable combined predicates are as follows. Ψ(T^1_q^obs_0) = {ψ_{1} = [x_1 = 5 ]; ψ_{2} = [x_1 > 5 ]}. Through the transitions guarded with ψ_{1} and ψ_{2}, the states { q_2}, { q_3, q_4} are, respectively, generated and the former is put into Q^obs. Meanwhile, the corresponding symbolic transitions are put into T^obs. For other unvisited states in Q^obs, we do the same things as that for state q^obs_0. Finally, we obtain the symbolic observer, as shown in Fig. <ref>, where Q^obs = {{ q_0,q_1}, {q_2}, { q_3,q_4}, { q_2,q_3,q_4}, {q_4}, { q_2,q_4}}. For the state {q_2}∈ Q^obs, we have { q_2}∩ Q_s≠∅ and { q_2}∩ Q_ns = ∅. Hence by Theorem <ref>, S is not current-state opaque w.r.t. {q_2}, {q_0,q_1,q_3,q_4} and θ. §.§ Initial-State Opacity of EP-EFAs The coding manners of the secret behavior in current-state opacity and initial-state opacity are reverse. According to this property, we transform the verification of initial-state opacity into the verification of current-state opacity for EP-EFAs. Firstly, we define the reverse operations for parameterized strings, data strings and symbolic transitions. Given a parameterized string u = σ_1⟨ a_1^1, a_1^2, …, a_1^k_1⟩ σ_2⟨ a_2^1, a_2^2, …, a_2^k_2⟩ … σ_n⟨ a_n^1, a_n^2, …, a_n^k_n⟩, its reverse is u^rdef=σ_n⟨ a_n^k_n, …, a_n^2, a_n^1⟩ … σ_2⟨ a_2^k_2, …, a_2^2,a_2^1⟩ σ_1⟨ a_1^k_1, …, a_1^2, a_1^1⟩. For a data string d = 𝔇(u), the reverse of d is d^rdef=𝔇(u^r). For a symbolic transition t = q q, the reverse of t is defined as t^rdef=q q, where the predicate φ^r is obtained from φ by changing the name of the free variable x^i_σ to x^k+1 -i_σ, i ∈ [1:k], e.g., the reverse of the X^4-predicate φ = [x^1_σ > x^3_σ∧ x^2_σ≠ x^4_σ] is φ^rdef= [x^4_σ > x^2_σ∧ x^3_σ≠ x^1_σ]. By the aforementioned definitions, we have d ∈φ if and only if d^r∈φ^r. Given an EP-EFA S = (Q, Σ, X, Q_0, T). The reverse of S is defined as S^r = (Q, Σ, X, Q_0^r, T^r), where the set of initial states is Q_0^r = Q and the set of symbolic transitions is T^r = {t^r|t∈ T}. Definition <ref> generalizes the notion of reverse automata <cit.>, which has been widely used in many fields. In particular, by constructing the observer for reverse finite automata, Wu et al. <cit.> proposed an approach to verify the initial-state opacity for classic DESs. The following proposition follows from the definitions of the reverse operations, symbolic transitions, languages and observations. Given a transition t, a parameter tuple d, a parameterized string u, an observation w, and an EP-EFA S = (Q, Σ, X, Q_0, T) and its reverse S^r = (Q, Σ, X, Q_0^r, T^r). The following equations hold. 1) d ∈ prd(t) ⟺ d^r∈ prd(t^r), where the pdc(t) and pdc(t^r) denote the predicates of t and t^r, respectively. 2) q q⟺q q. 3) u ∈ L_Q_1^Q_2(S) ⟺ u^r∈ L_Q_2^Q_1(S^r). 4) w = θ(𝔇(u)) ⟺ w^r = θ(𝔇(u^r)). Given an EP-EFA S = (Q, Σ, X, Q_0, T) with the set of secret initial states Q_s⊆ Q_0, the set of non-secret initial states Q_ns⊆ Q_0, and observation function θ. The reverse of S is S^r = (Q, Σ, X, Q_0^r, T^r) where Q_0^r = Q. S is initial-state opaque w.r.t. 
Q_s, Q_ns and θ, if and only if S^r is current-state opaque w.r.t. Q_s, Q_ns and θ. By Definition <ref>, S is initial-state opaque w.r.t. Q_s, Q_ns and θ, if and only if (∀ u ∈ L^Q_Q_s(S)) (∃ v ∈ L^Q_Q_ns(S)) θ(𝔇(u))=θ(𝔇(v)). This is equivalent to (∀ u^r∈ L^Q_s_Q(S^r)) (∃ v^r∈ L^Q_ns_Q(S^r)) θ(𝔇(u^r)) = θ(𝔇(v^r)) by Proposition <ref>. This means S^r is current-state opaque w.r.t. Q_s, Q_ns and θ according to Definition <ref>. Theorem <ref> implies that the verification of initial-state opacity can be efficiently reduced to the verification of current-state opacity. Since the reverse SPA-EFA S^r has the same scale as S, the complexity of the verification of initial-state opacity is also g(K) * 2^N+M*K. Consider the EP-EFA S shown in Fig. <ref> with the same observation function θ as that in Example <ref>. Suppose the set of initial states is Q_0 = {q_0, q_1, q_2}, and the secret initial states and non-secret initial states are Q_s = { q_2} and Q_ns = { q_0, q_1}, respectively. For the reverse SPA-EFA S^r = (Q, Σ, X, Q, T^r), we construct the symbolic observer Obs(S^r) = { Q^obs_r, q^obs_0, T^obs_r} according to Algorithm <ref>. For the initial state of the observer q^obs_0 = Q, we obtain the subsets of observable transitions as follows. T^2_q^obs_0 = { q_3 q_0 ; q_2 q_1}. T^1_q^obs_0 = { q_2 q_1; q_3 q_0; q_4 q_3; q_2 q_2; q_4 q_4}. For T^2_q^obs_0, the set of satisfiable combined predicates are as follows. Ψ(T^2_q^obs_0) = {ψ_{1} = [x_1≥ 5 ∧ x_2≥ 5 ∧ x_1≠ x_2 +1]; ψ_{1,2} = [x_1≥ 5 ∧ x_2≥ 5 ∧ x_1 = x_2 +1] }. Through the transitions guarded with the above predicates, the states { q_0} and {q_0, q_1} are generated and put into Q^obs_r. For T^1_q^obs_0, the set of satisfiable combined predicates are as follows. Ψ(T^1_q^obs_0) = {ψ_{1,3,4,5} = [x_1 = 5 ]; ψ_{2,3,4,5} = [x_1 = 6 ]; ψ_{2,4,5} = [x_1≥ 7 ]; }. Through the transitions guarded with the above predicates, the states { q_0, q_1, q_2, q_3,q_4} and { q_0, q_2, q_3, q_4} are generated and put into Q^obs_r. Similarly, we handle other unvisited states in Q^obs_r, and obtain the symbolic observer Obs(S^r), shown in Fig. <ref>, where Q^obs_r = {{q_0,q_1,q_2,q_3,q_4}, {q_0,q_2,q_3,q_4}, {q_0,q_1}, {q_0}}. Notice that for each state q^obs_r∈ Q^obs_r, q^obs_r∩ Q_s≠∅ always implies q^obs_r∩ Q_ns≠∅, thus S^r is current-state opaque w.r.t. {q_2}, {q_0,q_1} and θ. By Theorem <ref>, S is initial-state opaque w.r.t. {q_2}, {q_0,q_1} and θ. §.§ Infinite-Step Opacity of EP-EFAs Yin et al. <cit.> presented an ingenious method to verify the infinite-step opacity of FAs by combining the observers of the obverse and reverse automata (called two-way observers in <cit.>). Following this idea, we have the theorem as follows. Given an EP-EFA S = (Q, Σ, X, Q_0, T) with the set of secret states Q_s, the set of non-secret states Q_ns, and the observation function θ. The reverse of S is S^r = (Q, Σ, X, Q_0^r, T^r) where Q_0^r = Q. S is infinite-step opaque w.r.t. Q_s, Q_ns and θ, if and only if (∀ w ∈Θ(S))(∀w^r∈Θ(S^r)) [Est^S(w) ∩ Est^S^r(w^r) ∩ Q_s ≠∅⇒ Est^S(w) ∩ Est^S^r(w^r) ∩ Q_ns≠∅]. By Equation (<ref>), Est^S(w) and Est^S^r(w^r) are {q∈ Q | q_0q∧ q_0∈ Q_0∧ w = θ(𝔇(u))} and {q∈ Q | q q∧ q ∈ Q ∧w^r = θ(𝔇(u^r))}, respectively, and the latter further implies that {q∈ Q | q q ∧ q ∈ Q ∧w = θ(𝔇(u))} by Proposition <ref>. Therefore, Est^S(w) ∩ Est^S^r(w^r) ∩ Q^s and Est^S(w) ∩ Est^S^r(w^r) ∩ Q^ns, respectively, are equivalent to A={ q∈ Q_s | q_0q∧q q ∧ q_0∈ Q_0∧ w = θ(𝔇(u)) ∧w = θ(𝔇(u)) }, and B={ q∈ Q_ns | q_0q∧q q ∧ q_0∈ Q_0∧ w = θ(𝔇(v)) ∧w = θ(𝔇(v)) }. 
To complete the proof, it is sufficient to show the equivalence between Equations (<ref>) and (<ref>). Firstly, we prove that Equation (<ref>) implies Equation (<ref>). For any uu∈ L(S) satisfying u ∈ L_Q_0^Q_s(S), let w_1 = θ(𝔇(u)), w_1 = θ(𝔇(u)), and then we have w_1∈Θ(S) and w_1^r∈Θ(S^r). Taking the w_1 and w_1 here as the w and w in Equation (<ref>), then we have A ≠∅. By Equation (<ref>), we have B ≠∅. which implies that there exists vv∈ L(S) satisfying v∈ L_Q_0^Q_ns(S), such that θ(𝔇(u)) = θ(𝔇(v)) and θ(𝔇(u)) = θ(𝔇(v)). This means that Equation (<ref>) holds. Secondly, we prove Equation (<ref>) implies Equation (<ref>). For any w ∈Θ(S) and w^r∈Θ(S^r) satisfying A ≠∅, we have uu∈ L(S), such that u ∈ L^Q_s_Q_0(S), w = θ(𝔇(u)) and w = θ(𝔇(u)). By Equation (<ref>), there exists vv∈ L(S) such that v ∈ L^Q_ns_Q_0(S), θ(𝔇(u))=θ(𝔇(v)) and θ(𝔇(u))=θ(𝔇(v)), which implies B ≠∅. Therefore, Equation (<ref>) holds. According to Lemmas <ref>, <ref>, the state space of the observer of an EP-EFA are exactly the set of state estimations. By Theorem <ref>, the verification of infinite-step opacity can be realized by going through the state spaces of Obs(S) and Obs(S^r). Hence, we have the following algorithm (Algorithm <ref>) to verify the infinite-step opacity of EP-EFAs. As discussed before, the complexity of step 1) and step 2) of Algorithm <ref> is g(K) * 2^N+M*K. Since |Q^obs| ≤ 2^N and |Q^obs_r| ≤ 2^N, the complexity of Step 3) of Algorithm <ref> is 4^N. Therefore, the complexity of the verification of infinite-step opacity is g(K) * 2^N+M*K + 4^N. Consider the EP-EFA shown in Fig. <ref>, where the set of secret states Q_s ={ q_3} and the set of non-secret states Q_ns = { q_4}. The Obs(S) and Obs(S^r) have been calculated in Examples <ref> and <ref>, as shown in Fig. <ref> and Fig. <ref>, respectively. Notice that q^obs∩ q^obs_r∩{q_3}≠∅ implies q^obs∩ q^obs_r∩{q_4}≠∅ for all the pairs of states (q^obs,q^obs_r) ∈ Q^obs× Q^obs_r. Therefore, S is infinite-step opaque w.r.t. {q_3}, {q_4} and θ. § CONCLUSION In this paper, we have investigated two parametric DESs models, i.e., EFAs and EP-EFAs, and then established a preliminary opacity theory for parametric DESs, which lays a foundation to analyze the opacity for complex systems. The parametric DESs well extends the classic DESs by means of the symbolic transitions carrying predicates over the infinite parameter space. The parametric DESs can efficiently represent and process many real-world data with the help of SMT solvers. It has been illustrated that the coexistence of state and event parameters in the predicates not only enhances the parametric model but also complicates it. Specifically, we have proved that EFAs model is more expressive than EP-EFAs model, and also proved that the opacity properties of EFAs are undecidable in general. In addition, EP-EFAs model reduces the complexity of EFAs by removing the state parameter, which makes its opacity properties decidable. We have provided the verification algorithms for the current-state opacity, initial-state opacity and infinite-step opacity of EP-EFAs model, and discussed the complexity of these algorithms. One of the future work is to investigate the opacity enforcement of parametric DESs. Another work worthy of further investigation is to explore a more powerful parametric model whose opacity properties are still decidable. § ACKNOWLEDGMENTS This work is supported by the National Natural Science Foundation of China (Grant No. 
61876195), the Natural Science Foundation of Guangdong Province of China (Grant No. 2022A1515011136), the Special Projects in Key Fields Foundation of the Department of Education of Guangdong Province of China (Grant No. 2021ZDZX1043), Guangxi Science and Technology Project (No. Guike AD23026227) and the Project Improving the Basic Scientific Research Ability of Young and Middle-aged Teachers in Guangxi Universities of China (Grant No. 2021KY0591). 1 IEEEtran desbook C. G. Cassandras and S. Lafortune, Introduction to Discrete Event Systems, 2nd Ed., New York, NY, USA: Springer, 2008. opacity-review R. Jacob, J. J. Lesage, and J. M. Faure, “Overview of discrete event systems opacity: models, validation, and quantification," Annual Reviews in Control, vol. 41, pp. 135-146, 2016. l-opacity F. Lin, “Opacity of discrete event systems and its applications," Automatica, vol. 47, no. 3, pp. 496-503, 2011. cso A. Saboori and C. N. Hadjicostis, “Notions of security and opacity in discrete event systems," in Proceeding of the 46th IEEE Conference on Decision and Control, New Orleans, LA, USA, 2007, pp. 5056-5061. iso A. Saboori and C. N. Hadjicostis, “Verification of initial-state opacity in security applications of DES," in Proceedings of the 9th International Workshop on Discrete Event Systems, Göteborg, Sweden, 2008, pp. 328-333. ifo Y. Wu and S. Lafortune, “Comparative analysis of related notions of opacity in centralized and coordinated architectures," Discrete Event Dynamic Systems, vol. 23, no. 3, pp. 307-339, 2013. infinite X. Yin and S. Lafortune, “A new approach for the verification of infinite-step and K-step opacity using two-way observers," Automatica, vol. 80, pp. 162-171, 2017. pre-opacity S. Yang and X. Yin, “Secure your intention: on notions of pre-opacity in discrete-event systems," IEEE Transactions on Automatic Control, DOI:10.1109/TAC.2022.3210148, 2022. supervisory-enforcement1 J. Dubreil, P. Darondeau, and H. Marchand, “Supervisory control for opacity," IEEE Transactions on Automatic Control, vol. 55, no. 5, pp. 1089-1100, 2010. supervisory-enforcement2 Y. Xie, X. Yin, and S. Li, “Opacity enforcing supervisory control using non-deterministic supervisors," IEEE Transactions on Automatic Control, DOI: 10.1109/TAC.2021.3131125, 2021. output-enforcement1 X. Yin, S. Li , “Synthesis of dynamic masks for infinite-step opacity," IEEE Transactions on Automatic Control, vol. 65, no. 4, pp. 1429-1441, 2020. output-enforcement2 C. Keroglou and S. Lafortune, “Embedded insertion functions for opacity enforcement ," IEEE Transactions on Automatic Control, vol. 66, no. 9, pp. 4184-4191, 2021. output-enforcement3 X. Li , C. N. Hadjicostis and Z. Li, “Extended insertion functions for opacity enforcement in discrete-event systems," IEEE Transactions on Automatic Control, vol. 67, no. 10, pp. 5289-5303, 2022. timed-opacity F. Cassez, “The dark side of timed opacity," in Advances in Information Security and Assurance (Lecture Notes in Computer Science), Berlin, Germany: Springer, vol. 5576, 2009, pp. 21-30. timed-opacity2 L. Wang, N. Zhan, and J. An, “The opacity of real-time automata," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 37, no. 11, pp. 2845-2856, 2018. network-opacity1 J. Yang, W. Deng, D. Qiu, and C. Jiang, “Opacity of networked discrete event systems," Information Sciences, vol. 543 pp. 328-344, 2021. network-opacity2 Z. Zhang, S. Shu, C. Xia, Networked opacity for finite state machine with bounded communication delays, Information Sciences, vol. 
572, pp. 57-66, 2021. petri-opacity1 Y. Tong, Z. Li, C. Seatzu, and A. Giua, “Verification of state-based opacity using Petri nets," IEEE Transactions on Automatic Control, vol. 62, no. 6, pp. 2823-2837, 2017. petri-opacity2 X. Cong, M. Fanti, A. Mangini, and Z. Li, “On-line verification of current-state opacity by Petri nets and integer linear programming," Automatica, vol. 94, pp. 205-213, 2018. petri-opacity3 Y. Dong, Z. Li, and N. Wu, “Symbolic verification of current-state opacity of discrete event systems using Petri nets," IEEE Transactions on Systems, Man, and Cybernetics: Systems, DOI:10.1109/TSMC. 2022.3151695, 2022. opacity-cps1 X. Yin, M. Zamani, and S. Liu, “On approximate opacity of cyber-physical systems," IEEE Transactions on Automatic Control, vol. 66, no. 4, pp. 1630-1645, 2021. opacity-cps2 S. Liu, A. Trivedi, X. Yin, and M. Zamani, “Secure-by-construction synthesis of cyber-physical systems," Annual Reviews in Control, vol. 53, pp. 30-50, 2022. probilistic-opacity1 A. Saboori and C. N. Hadjicostis, “Current-state opacity formulations in probabilistic finite automata," IEEE Transactions on Automatic Control, vol. 59, no. 1, pp. 120-133, 2014. probilistic-opacity2 X. Yin, Z. Li, W. Wang, and S. Li, “Infinite-step opacity and K-step opacity of stochastic discrete-event systems," Automatica, vol. 99, pp. 266-274, 2019. fuzzy-opacity1 W. Deng, D. Qiu, and J. Yang, “Opacity measures of fuzzy discrete event systems," IEEE Transactions on Fuzzy Systems, vol. 29, no. 9, pp. 2612-2622, 2021. fuzzy-opacity2 W. Deng, D. Qiu, and J. Yang, “Fuzzy infinite-step opacity measure of discrete event systems and its applications," IEEE Transactions on Fuzzy Systems, vol. 30, no. 3, pp. 885-892, 2022. efa1 Y. Chen and F. Lin, “Modeling of discrete event systems using finite state machines with parameters." in Proceedings of 2000 IEEE International Conference on Control Applications (CCA), Anchorage, Alaska, USA, 2000, pp. 941-946. efa2 L. Ouedraogo, R. Kumar, R. Malik, and K. Åkesson, “Nonblocking and safe control of discrete-event systems modeled as extended finite automata," IEEE Transactions on Automatation Science and Engineering, vol. 8, no. 3, pp. 560-569, 2011. efa3 M. A. Goorden, M. Fabian, J. M. Mortel-Fronczak et al., “Compositional coordinator synthesis of extended finite automata," Discrete Event Dynamic Systems, vol. 31, no. 3, pp. 317-348, 2021. learning S. Cassel, F. Howar, B. Jonsson, and B. Steffen, “Learning extended finite state machines," Formal Aspects of Computing, vol. 28, no. 2, pp. 233-263, 2016. esfa L. D'Antoni and M. Veanes, “Extended symbolic finite automata and transducers," Formal Methods in System Design, vol. 47, no. 1, pp. 93-119, 2015. sft M. Veanes, P. Hooimeijer, B. Livshits et al., “Symbolic finite state transducers: algorithms and applications," in Proceedings of the 39th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, Philadelphia PA, USA, 2012, pp. 137-150. infinite-alphabet1 Abdulla, Parosh Aziz, et al. “General decidability theorems for infinite-state systems," in Proceedings 11th Annual IEEE Symposium on Logic in Computer Science. New Brunswick, NJ, USA, pp. 313-321, 1996. infinite-alphabet2 Segoufin L, Segoufin L, “Automata and logics for words and trees over an infinite alphabet," in Computer Science Logic: 20th International Workshop, Szeged, Hungary, Springer Berlin Heidelberg, pp. 41-57, 2006. infinite-alphabet3 F. Neven, T. Schwentick, and V. 
Vianu, “Finite state machines for strings over infinite alphabets," ACM Transactions on Computational Logic, vol. 5, no. 3, pp. 403-435, 2004. solver2 C. Barrett and C. Tinelli, “Satisfiability modulo theories," in Handbook of Model Checking, Cham, Switzerland: Springer, 2018, pp. 305-343. book-computation S. Michael, Introduction to the Theory of Computation, 3rd Ed., Boston, MA, USA: Cengage Learning, 2012. CM M. Minsky, Computation: Finite and Infinite Machines, 1st Ed., Englewood Cliffs, N. J., USA: Prentice-Hall, 1967. [ < g r a p h i c s > ] Weilin Deng received the B.S. and M.S. degrees in computer science from South China University of Technology, Guangzhou, China, in 2003 and 2008, respectively, and the Ph.D. degree in computer software and theory from Sun Yat-Sen University, Guangzhou, China, in 2016. From 2016 to 2019, he was an associate research fellow with Sun Yat-Sen University. He is currently an associate professor with Guangdong University of Finance. His current research interests include discrete-event systems, fuzzy/probabilistic systems and computations, and theoretical computer science. He is the author or co-author of more than 20 peer-review papers published in various academic journals and conferences, including IEEE TAC, IEEE TFS, IEEE CDC, INT J CONTROL and Information Sciences. [ < g r a p h i c s > ] Daowen Qiu received the M.S. degree in mathematics from Jiangxi Normal University, Nanchang, China, in 1993 and the Ph.D. degree in mathematics from Sun Yat-Sen University, Guangzhou, China, in 2000. During 2000 and 2001, he was a Postdoctoral Researcher in computer science with Tsinghua University, Beijing, China. Since August 2002, he has been associated with Sun Yat-Sen University, and then a Full Professor of computer science in May 2004. His current research interests include quantum computing, discrete-event systems, fuzzy and probabilistic computation, and he has focused on models of quantum and probabilistic computation, quantum information. He is the author or co-author of more than 160 peer-review papers published in various academic journals and conferences, including Information and Computation, Artificial Intelligence, Journal of Computer and System Sciences, Theoretical Computer Science, IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS PART B, IEEE TRANSACTIONS ON AUTOMATIC CONTROL, IEEE TRANSACTIONS ON FUZZY SYSTEMS, Physical Review A, Quantum Information and Computation, Journal of Physics A, and Science in China. He is an editor of Theoretical Computer Science. [ < g r a p h i c s > ] Jingkai Yang received the B.S. and M.S. degrees in mathematics from Guangxi Normal University, Guilin, China, in 2006 and 2009, respectively, and the Ph.D. degree in computer science and technology from Sun Yat-Sen University, Guangzhou, China, in 2022. He is currently an associate professor with Yulin Normal University. His main research interests include opacity analysis, supervisory control and failure diagnosis of discrete-event systems.
http://arxiv.org/abs/2307.06011v1
20230712084921
Reheating and Leptogenesis after Vector inflation
[ "Simon Cléry", "Pascal Anastasopoulos", "Yann Mambrini" ]
hep-ph
[ "hep-ph", "astro-ph.CO" ]
α β̱ γ δ̣ ε ζ η θ ıι κ̨ łλ μ ν ξ π ρ̊ σ τ ϕ ψ χ̧ ω φ ϑ ϵ Γ Δ Θ ŁΛ Ξ Π Σ Υ Φ Ψ Χ Ω ϵ TeV GeV MeV keV equationsection a]Simon Cléry,b]Pascal Anastasopoulos,a]Yann Mambrini,[a] Université Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France[b] Institute of High Energy Physics, Austrian Academy of Sciences, Georg-Coch-Platz 2, 1010 Vienna, [email protected]@[email protected] We study the reheating and leptogenesis in the case of a vector inflaton. We concentrate on particle production during the phase of oscillating background, especially gravitational production induced by the presence of non-minimal coupling imposed by an isotropic and homogeneous Universe. Including processes involving the exchange of graviton, we then extend our study to decay into fermions via direct or anomalous couplings. The necessity of non-minimal gravitational coupling and the gauge nature of couplings to fermions implies a much richer phenomenology than for a scalar inflaton. Reheating and Leptogenesis after Vector inflation [ August 12, 2023 ================================================= § INTRODUCTION Inflation is one of the most successful solutions to the horizon and flatness problems <cit.> but inflationary models are multiple. They all describe with great success a phenomenon of rapid expansion by introducing one or several scalar fields, the inflaton(s) with an almost constant energy density during a slow-rolling phase <cit.>. In the meantime, its scalar nature ensures natural homogeneity and isotropy up to the quantum level of perturbation. On the other hand, the first articles in the literature already stressed the importance of solving the reheating problem <cit.>. Indeed, at the end of the inflationary phase, the inflaton(s) enter an oscillatory phase while in the meantime dissipating its energy into quanta forming the primordial plasma. This phase of conversion is called reheating <cit.>. Adding ad-hoc new couplings of the inflaton fields to fermions or bosons can be sufficient to finalize the reheating process <cit.>, even at a gravitational level <cit.> even if non-minimal couplings to the Ricci scalar are necessary to avoid the overproduction of primordial gravitational waves <cit.>. It is interesting to note that higher-spin bosonic fields have been overlooked since they naturally generate a priori inhomogeneities due to their spatial dependence. Whereas a first attempt at a vector inflaton was proposed in <cit.>, the authors of <cit.> showed that the presence of three orthogonal vector fields ensures at the same time a slow-roll regime equivalent to a scalar inflaton while generating a homogeneous Universe. The price to pay is easy to guess: the need for a conformal coupling to the Ricci scalar to cancel the vectorial nature of the field, while the presence of 3 orthogonal fields ensures that no specific direction in the expansion is privileged. It becomes then interesting to look into the details of reheating in this framework. Indeed, the presence of non-minimal coupling, necessary for the inflaton to slow-roll, ensure by default a portal between the inflaton and the standard model bath. Even if weak, it can complete the reheating process as it was shown for the case of a scalar inflaton in <cit.>. On the other hand, it is also natural to imagine the presence of couplings between the vector inflatons A_i and particles charged under it. If one aspires to embed our construction into a Grand Unified framework, the coupling to spin-1 fields, g_X, should be of the order of g_GUT≃ 0.5. 
This renders the model much more constrained than in the case of a scalar inflaton, where the reheating is usually considered through an unconstrained Yukawa-type coupling. Other decay processes are also present in the case of a vector inflaton and absent in the scalar case. These are the Chern-Simons couplings generated by the decoupling of anomalies <cit.>. Indeed, if a gauge structure is hidden under the presence of a vector inflaton field, it is natural to suppose the existence of heavy fermions. If their mass lies above the inflationary scale, at the GUT scale, for instance, their decoupling will generate effective three-vectorial vertices through triangle anomalies which could generate decay or new scattering processes inducing an effective reheating. Finally, an even more suggestive setup, especially in an SO(10) framework, would be the presence of a direct coupling between the inflaton fields A_i and the right-handed neutrinos N_R. In this case, the reheating becomes very efficient, and the decay process can even generate a sufficient lepton asymmetry to ensure successful baryogenesis. The paper is organized as follows. After reminding the physics in the presence of a vector inflaton in section 2, we compute and analyze the reheating phase in section 3. We consider first purely gravitational couplings, including the exchange of graviton, then direct gauge couplings to fermions before looking into the details of loop-induced coupling generated by triangle anomalies. Finally, we look into the details of leptogenesis, adding the presence of right-handed neutrino in section 4, before concluding. § THE FRAMEWORK We consider the framework of Vector inflation introduced in <cit.>, where cosmic inflation is driven by vector fields. The action in the Jordan frame is written [Note that such models may lead to instabilities at the perturbation level <cit.>, that can be cured by different modifications of potential and kinetic terms of the vector fields <cit.>.] S = ∫ d^4x √(-g)( -M_P^2/2Ω^2 ℛ -1/4F_μνF^μν + 1/2 M^2 A_μ A^μ) with Ω^2 = 1 + ξ A_μ A^μ/M_P^2 F_μν = ∇_μ A_ν - ∇_ν A_μ = ∂_μ A_ν - ∂_ν A_μ where we defined ℛ as the Ricci scalar curvature, M the mass of the vector field A_μ and ξ the non- minimal coupling to gravity of this vector field [M_P = (8 π G_N)^-1/2≃ 2.4 × 10^18 GeV is the reduced Planck mass.]. Note that ∇_μ is the covariant derivative, ∇_μ A_ν = ∂_μ A_ν -Γ_μν^α A_α We consider the usual FLRW metric with the following signature ds^2 = dt^2 -a^2(t)δ_ikdx^idx^k which provides the following jacobian determinant √(-g)=a^3(t). The variation of the action with respect to A^μ, assuming homogeneity of the fields, i.e ∀α, ∀ i, ∂_i A_α = 0, yields the following equations for time and spatial components of A^μ 1/√(-g)∂/∂ x^μ(√(-g)F^μν) + (M^2 -ξℛ)A^=0 , which solutions are A_0 = 0 Ä_̈j̈ + HȦ_̇j̇ + (M^2 -ξℛ)A_j=0 with H=ȧ/a and R=-6 ä/a-6 H^2=-6 Ḣ -12 H^2. From A^α A_α = A_0^2-A_i^2/a^2=-A_i^2/a^2 , it is natural to redefine a normalized conformal vector field B_μ=A_μ/a, which does not "feel" the effect of the expansion under the Lorentz transformation B^μ = g^μνB_ν. It should behave as a scalar field in the conformal theory. Indeed, the equation (<ref>) then becomes B̈_i + 3 H Ḃ_i + M^2 B_i +(1+6 ξ) [Ḣ + 2H^2]B_i=0 . In the same way, we can also extract from the action (<ref>) the energy density of the field B_i, T_00 with T_μν = 2/√(-g)δ S/δ g^μν . We obtained T_00=1/2Ḃ_i^2 + M^2/2B_i^2 +H^2/2(1+6 ξ) B_i^2+(1+6ξ)H B_i Ḃ_i . 
The interesting point concerning the above equations (<ref>) and (<ref>), is that whereas the non-minimal coupling ξ=-1/6 converts a massless scalar field into a conformal invariant field, the effect here is the opposite. It violates the classical conformal invariance of a massless vector field. Hence if we set ξ = -1/6, we recover the same equations of motion and the energy density for B_i as for a minimally coupled scalar inflaton. In other words, a non-minimally coupled vector field with ξ = -1/6 leads to the same dynamics as a minimally coupled scalar field in the cosmological background. Hence, it can mimic the inflaton dynamics in the standard framework of slowly rolling single scalar field ϕ, in a potential V(ϕ), which exhibits a plateau for large field values, with ϕ=B_i. However, to avoid a preferred direction for inflation which the vector field direction would give, one needs to introduce at least three mutually orthogonal vector fields B_i^(a), a=1,2,3, as it has been noted in <cit.>. To understand this specificity, we have to compute the pressure from the stress-energy tensor T_ij. We obtained 1/a^2T_ij = (M^2+6 ξḢ + (12 ξ-1)H^2) B_iB_j-Ḃ_i Ḃ_j -H B_i Ḃ_j - H Ḃ_i B_j +[((2 ξ-1/2)M^2+(9 ξ + 5/2)H^2 +12 ξ^2 Ḣ) B_i^2 +(1/2-2 ξ)Ḃ_i^2+(1+2 ξ)B_iḂ_i]δ_ij , with the pressure P_ij=T_ij. To visualize the phenomena, it is easier to choose the system of coordinates with the third axes aligned with B_i. In this system, B_i can be written as B_i=B δ_iz. During inflation, where we can neglect the time derivative of the fields and the Hubble rate, Eq.(<ref>) gives for the pressure in the x,y and z directions respectively, for ξ=- 1/6: P_xx=P_yy≃ + a^2 B^2 H^2    P_zz≃ -2 a^2 B^2 H^2 , where we supposed H≫ M during inflation. We clearly understand that such a Universe would extend exponentially in the z-direction, whereas it would be contracted in the x and y directions, more like a cigar shape than an isotropic shape. To circumvent this problem, the solution is obvious: one should symmetrize the system in all directions, adding two new orthogonal fields with the same amplitude B. It is then easy to show that the stress tensor T_ij (<ref>) becomes 1/a^2 T_ii=(1/2-6 ξ)Ḃ^2 +[ (6 ξ - 1/2)M^2+(39 ξ + 13/2)H^2 +(36 ξ^2 + 6 ξ)Ḣ]B^2 +(1+6 ξ)H B Ḃ . whereas the density of energy becomes T_00=3/2Ḃ^2 + 3/2M^2 B^2 +3/2(1+6 ξ)H Ḃ +3(1+6 ξ)H B_i Ḃ_i . For ξ=-1/6 one obtains ρ=3/2Ḃ^2 + 3/2 M^2 B^2 , P=3/2Ḃ^2-3/2M^2B^2 . It corresponds to the classical pressure and density of energy for a set of 3 scalar inflaton background fields, respecting the classical equation of motion (<ref>): B̈ + 3 H Ḃ + M^2 B =0 To achieve an inflationary mechanism, we now introduce a specific potential for the background fields, the same for each copy of these vector fields. Of course, many possible potentials V(|B|) can account for inflation. However, the relevant calculations during the reheating era are largely independent of the potential during inflation and depend only on the shape of the potential around the minimum. Without loss of generality, we will assume that V(|B|) is among the class of α-attractor V(|B|) = λ M_P^4[√(6)tanh(|B|/√(6)M_P)]^k where |B|^2=δ_ijδ_abB^(a)_iB_j^(b). The overall scale of the potential parameterized by the coupling λ can be determined from the amplitude of the CMB power spectrum A_S, λ≃18π^2 A_S/6^k/2N^2 , where N* is the number of e-folds measured from the end of inflation to the time when the pivot scale k_*=0.05  Mpc^-1 exits the horizon. 
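As a quick numerical illustration of this normalization (a standalone sketch, not part of any derivation above), λ can be evaluated for a few choices of k with the Planck amplitude ln(10^10 A_S)=3.044 and N_*=55 adopted in the analysis below.

import numpy as np

A_S = np.exp(3.044) * 1e-10      # scalar amplitude, ln(10^10 A_S) = 3.044
N_star = 55                      # e-folds at the pivot scale

for k in (2, 4, 6, 8):
    lam = 18 * np.pi**2 * A_S / (6**(k / 2) * N_star**2)
    print(f"k = {k}:  lambda = {lam:.2e}")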
In our analysis, we use ln(10^10A_S)=3.044<cit.> and set N*=55. This potential can be expanded near its minimum [Our discussion is general and not limited to T-models of inflation as the way we express the minimum of the potential is generic.] by V(|B|)= λ|B|^k/M_P^k-4 ; |B| ≪ M_P . In this class of models, inflation occurs at large field values (|B| ≫ M_P). After the exponential expansion period, the fields oscillate about the minimum, and the reheating process begins. The end of inflation may be defined when ä=0 where a is the cosmological scale factor, and we denote the inflaton energy density at a_ end by ρ_ end. In addition to the inflationary sector, we first assume no other couplings besides the non-minimal coupling to gravity of the vector fields. To extract the coupling between B and the Standard Model particles, we consider the conformal transformation between the Jordan frame and the Einstein frame, which involves the non-minimal coupling ξ, g_μν^(E)= Ω^2 g_μν^(J) where the superscripts stand respectively for the Einstein and the Jordan frame. It can be shown <cit.> that in the Jordan frame, the Ricci curvature expressed with the Einstein frame variables is given by R^(J) = Ω^2[ R^(E) + 6 g^μν∇_μ∇_νlog(Ω) - 6g^μν (∇_μlog(Ω))(∇_νlog(Ω))]. Noting that √(-g^(J)) = 1/Ω^4√(-g^(E)), the first term provides the usual Einstein-Hilbert action and hence usual gravity. The second term is a total derivative that will play no role in the action, and the last one can be expressed as a modification of the kinetic term for the vector fields A^μ. In the Einstein frame, the total action that includes the Standard Model (SM) fields and additional vector fields can be expressed as follows: S^(E) =∫ d^4x√(-g)[-M_P^2/2 R^(E) - 3/2Ω^2M_P^4∇_μ(ξ A_ν A^ν)∇^μ(ξ A_λ A^λ) -1/4F_μνF^μν + 1/Ω^2V(A_μ A^μ) .                           . -1/Ω^4V_h( H) + 1/Ω^2(D_μ H)^†(D^μ H) + ... ] where H denotes the Higgs complex scalar doublet in the SM. Note that the kinetic terms are not canonical. In what follows, we will be interested in the small-field limit corresponding to the post-inflationary phase |ξ A_μ A^μ|/M_P^2 ≪ 1 . In that case, we can expand the kinetic and potential terms in the action in powers of M_P^-2. We obtain a canonical kinetic term for the scalar and vector fields and deduce the leading-order interactions induced by the non- minimal couplings. The latter can be brought to the form ℒ_non-min. = -ξ/2M_P^2∂_α h ∂^α h A_μ A^μ + m_h^2ξ/M_P^2h^2 A_μ A^μ , Here we neglected the quartic coupling of the Higgs field in comparison with its mass m_h, and we considered N_h=4 real scalar degrees of freedom h, embedded in the complex scalar doublet H. In addition to this interaction generated in the Einstein frame by the non-minimal couplings to the gravity of the background fields, we should also consider the unavoidable graviton exchange to produce matter and radiation <cit.>. In the Einstein frame, the metric can be expanded locally around Minkowski space-time, g_μν≃η_μν+2h_μν/M_P[Note that this is the canonical normalization of spin-2 perturbation of the metric.]. Then the minimal gravitational interactions are described by the Lagrangian <cit.>√(-g) ℒ_ min= -1/M_P h_μν (T^μν_ SM+T^μν_A ) , where A refers to any background vector field. The canonical graviton propagator for momentum p is Π^μνρσ(p) = η^ρνη^σμ + η^ρμη^σν - η^ρση^μν/2p^2 . 
The form of the stress-energy tensor T^μν_s depends on the spin s of the field and, for massive vector field, takes the form T_1^μν = -1/2( F^μ_α F^να + F^ν_α F^μα) + 1/4 g^μν F^αβ F_αβ -1/2g^μν M^2 A_α A^α + M^2 A^μ A^ν , whereas for a scalar S, T^μν_0 = ∂^μ S ∂^ν S- g^μν[ 1/2∂^α S ∂_α S-V(S)] . § REHEATING §.§ Background scattering through gravitational portals From the action introduced in the precedent section, we can consider the couplings of the background vector fields responsible for inflation, which allow to produce relativistic quantas after inflation and to reheat the Universe. We first compute the equivalent reheating temperature generated by the gravitational portals, including the minimal and non-minimal coupling to the gravity of the background vector fields that oscillate after inflation. These result in direct couplings to the SM Higgs fields H that would constitute the primordial plasma. The minimal process of graviton exchange interferes with the direct production of Higgs bosons involving non-minimal coupling to gravity 1[baseline=-0.1cm] [every blob=/tikz/fill=gray!30,/tikz/inner sep=2pt] (i1) at (-1.75, 1.2) A_; (i2) at (-1.75,-1.2) A_; (l1) at (-0.5, 0); (r1) at (0.5, 0); (f1) at (1.75, 1.2) h; (f2) at (1.75,-1.2) h; * (i1) – [boson, style=red] (l1), (i2) – [boson, style=red] (l1), (f1) – [scalar] (r1), (f2) – [scalar] (r1), (r1) – [gluon, edge label'=h_] (l1);     +    1[baseline=-0.1cm] [every blob=/tikz/fill=gray!30,/tikz/inner sep=2pt] (i1) at (-1.75, 1.2) A_; (i2) at (-1.75,-1.2) A_; (m) at (0,0) ξ; (f1) at (1.75, 1.2) h; (f2) at (1.75,-1.2) h; * (i1) – [boson, style=red] (m), (i2) – [boson, style=red] (m), (f1) – [scalar] (m), (f2) – [scalar] (m); First, we make this observation on the background vector fields |A_μ A^μ|=|-1/a^2A_iA_i|=|B_iB_i| as A_0=0 from the homogeneity constraints. Hence, in every process involving the vector fields scattering, this is the physical vector field B_i, which is involved and which has only spatial components. We separate the slow varying envelop and the fast oscillating part of each polarization mode of the vector fields and then perform the Fourier expansion of the fast oscillating function B_i(t) =B(t) ∑_λ=1^3 ϵ_i,λ∑_n=1^∞𝒫_n e^-inω t The frequency of the oscillations can be obtained <cit.>ω = m_B √(π k/2(k-1))Γ(1/k+1/2)/Γ(1/k) where we defined the time-dependent effective mass of the condensate as m_B^2(t)=V^”(|B|(t))=k(k-1)λ M_P^2(|B|/M_P)^k-2 We consider that all polarization modes have the same dynamics, and we have the local completeness relation ∑_λ=1^3 ϵ_i,λϵ_j,λ^∗ =δ_ij The scattering amplitude related to the production rate of the processes A_μ + A_μ→ h_μν→SM^i+SM^i, can be parametrized by ℳ^1i∝ M_μν^1 Π^μνρσ M_ρσ^i , where i denotes the spin of the final state involved in the scattering process. 
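A short numerical check of the oscillation frequency ω introduced above is straightforward (a standalone sketch): the ratio ω/m_B depends only on k, and for a quadratic minimum, k=2, it reduces to ω=m_B, as expected.

import numpy as np
from scipy.special import gamma

def omega_over_mB(k):
    # omega / m_B = sqrt(pi k / (2(k-1))) * Gamma(1/k + 1/2) / Gamma(1/k)
    return np.sqrt(np.pi * k / (2.0 * (k - 1.0))) * gamma(1.0 / k + 0.5) / gamma(1.0 / k)

for k in (2, 4, 6, 10):
    print(f"k = {k:2d}:  omega / m_B = {omega_over_mB(k):.3f}")   # k = 2 gives exactly 1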
The partial amplitudes, M_μν^i, for two fields of momenta (p_1, p_2) in initial or final state, are given by <cit.> M_μν^0 = 1/2[p_1μ p_2ν + p_1ν p_2μ - η_μνp_1· p_2 - η_μν V”(S)] , M_μν^1/2 = 1/4v̅(p_2) [ γ_μ (p_1-p_2)_ν + γ_ν (p_1-p_2)_μ] u(p_1) , M_μν^1 = 1/2[ ϵ_2·ϵ_1(p_1 μ p_2 ν+p_1 ν p_2 μ) -ϵ_2· p_1(p_2 μϵ_1 ν+ϵ_1 μ p_2 ν) - ϵ_1· p_2(p_1 νϵ_2 μ+p_1 μϵ_2 ν) +(p_1· p_2 +V”(B))(ϵ_1 μϵ_2 ν+ϵ_1 νϵ_2 μ) +η_μν(ϵ_2· p_1ϵ_1· p_2-(p_1· p_2 + V”(B) ) ϵ_2·ϵ_1) ] , We obtain the transition amplitude involving the scattering of the background oscillating modes of the vector fields through the gravitational portals -iℳ^(n)≃ -i |B|^2/M_P^2𝒫^(2)_n( M_μν^1 Π^μνρσ M_ρσ^0 + ξE_n^2/4 (ϵ_1·ϵ_2) ) where we have neglected the Higgs mass m_h in comparison with energy available in the vector condensate √(s)=E_n=2nω≫ m_h[Here s is the Mandelstam variable.]. We have also introduced the Fourier coefficients, 𝒫_n^(2), associated with the expansion of the function B(t)^2, in opposition with 𝒫_n, that are associated with the Fourier expansion of B(t) as defined in <ref>. Then, summing over identical final states, symmetrizing, and averaging over initial states, we recover the unpolarized square amplitude [The factors of 2 in front of the amplitude square accounts for the sum over identical final states and symmetry, while the factor of 9 is for averaging over the initial state spins.] |ℳ̅^(n)|^2 = 1/92^2/2|B|^4/64 M_P^4|𝒫^(2)_n|^2 3/4E_n^4 (16ξ^2 -8ξ +3) Integrating the phase space with the initial state at rest and the two-body final state, we have the following rate of particle production R = 3× 2×N_h/8π∑_n=1^∞|ℳ̅^(n)|^2 = N_h/512π|B|^4/M_P^4(16ξ^2 -8ξ +3)∑_n=1^∞E_n^4|𝒫^(2)_n|^2 From the amplitude of the vectors fields, we can define the total energy densities they are carrying as a homogeneous condensate <cit.>ρ_B = 3/2( |̇Ḃ|̇^2 + 2V(|B|) ) where the factor of 3 accounts for the number of vector fields needed to impose a homogeneous and isotropic inflationary phase. During the reheating phase, we have to solve the following set of coupled Boltzmann equations dρ_B/dt + 3(1+w_B)Hρ_B ≃ 0 dρ_R/dt + 4Hρ_R = R× E_n where we neglected the depletion rate of the condensate until the reheating ended in the first equation, and we introduced the average equation of state for the condensate. Following the well-known result for scalar fields oscillating in the potential Eq.(<ref>), we have w_B = k-2/k+2. The first equation can be integrated to give <cit.>ρ_B = ρ_ end(/a)^6k/k+2 where = 3M_P^2H_ end^2 is the energy density in the entire background at the end of inflation. 
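The structure of this coupled Boltzmann system can be illustrated with a minimal numerical sketch in which the production term R × E_n is replaced by a schematic effective rate Γ_eff ρ_B; this replacement, the choice k=2 and the value of Γ_eff are assumptions made purely for illustration and do not reproduce any of the rates computed in this section.

import numpy as np
from scipy.integrate import solve_ivp

M_P = 2.435e18                   # reduced Planck mass in GeV
rho_end = (5e15)**4              # GeV^4, end-of-inflation energy density adopted below
H_end = np.sqrt(rho_end / (3.0 * M_P**2))
k = 2                            # shape of the minimum (assumption for this sketch)
w_B = (k - 2) / (k + 2)
Gamma_eff = 1.0e9                # GeV, schematic effective production rate (assumption)

def rhs(log_a, y):
    x_B, x_R = y                                        # energy densities in units of rho_end
    H = H_end * np.sqrt(x_B + x_R)                      # Friedmann equation
    dx_B = -3.0 * (1.0 + w_B) * x_B                     # condensate redshift, depletion neglected
    dx_R = -4.0 * x_R + (Gamma_eff / H) * x_B           # schematic radiation sourcing
    return [dx_B, dx_R]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], rtol=1e-8, atol=1e-20, max_step=0.1)
cross = np.argmax(sol.y[1] >= sol.y[0])                 # first grid point with rho_R >= rho_B
print(f"rho_R catches up with rho_B after roughly {sol.t[cross]:.1f} e-folds")

Reheating ends where the two curves cross, which is the condition ρ_B(a_RH) = ρ_R(a_RH) used below to define the reheating temperature.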
Using this solution to express the condensate amplitude during the oscillations, |B|=M_P (ρ_B/3λ M_P^4)^1/kM_P, we can integrate the second equation and obtain the evolution of radiation energy density in the form of Higgs bosons as a function of the scale factor for <a< ρ_R(a) ≃ 3^1-2k/kN_h M_P^4λ^1/k k^5/512π(16ξ^2 -8ξ +3)(π/2)^5/2(Γ(1/2+1/k)/Γ(1/k))^5 ×(k+2/8k-14)∑_n=1^∞n^5|𝒫^(2)_n|^2(/M_P^4)^2k-1/k(/a)^4 We introduce the same notation as <cit.>ρ_R(a) ≃α_k^ξ M_P^4 (k+2/8k-14)(/M_P^4)^2k-1/k(/a)^4 with α_k^ξ = 3^1-2k/kN_h M_P^4λ^1/k k^5/512π(16ξ^2 -8ξ +3)(π/2)^5/2(Γ(1/2+1/k)/Γ(1/k))^5 ∑_n=1^∞n^5|𝒫^(2)_n|^2 From this expression of the radiation energy density, defining the end of reheating as ρ_B() = ρ_R(), we obtain the expression of the reheating temperature for this process <cit.>π^2 g^*_ RH/30^4 = M_P^4 (/M_P^4)^4k-7/k-4(α_k^ξ (k+2)/8k-14)^3k/k-4 We show in Figure (<ref>) the evolution of as a function of the shape of the potential near the minimum, k∈[6,20], requiring that ξ=-1/6 to have successful inflation from these background vector fields. In this plot we considered the values N_h=4, ρ_ end = (5× 10^15)^4 ^4 and λ introduce in the first section Eq.(<ref>). As we can see, the needed non-minimal coupling to the gravity of the vector fields for successful inflation imposes an unavoidable lower bound on the reheating temperature reached through the perturbative processes depicted in this section. Hence, in the vector inflation framework, depending on the equation of state during reheating, i.e., depending on the leading order term in the expansion of the inflaton potential near the minimum, the reheating temperature of the Universe is constrained to be higher than the one given by the green line in Fig.(<ref>). We also show the region (blue-shaded) in the parameter space (, w_B), which is currently excluded by too excessive enhancement of primordial gravitational waves (GWs). Indeed, GWs generated by quantum fluctuations during inflation, followed by a reheating era where the inflaton energy redshifts faster than radiation, results in an enhancement of GWs spectrum <cit.>. This effect places a constraint from excessive GWs as dark radiation at BBN and CMB time <cit.> and offers a signal with a distinctive spectrum depending on the equation of state during w_B. In fact, we see in Fig.(<ref>) that we cannot rely solely on gravitational effects to reheat the Universe in the vector inflaton scenario, at least for w_B≤ 0.76 (corresponding to k≤ 15). For a higher equation of state, gravitational reheating is not excluded yet but is close to the exclusion region, making it possible to be probed by future GWs detectors. It becomes necessary to look for more efficient reheating mechanisms, also naturally present in the context of vector inflation, as we proposed in the rest of the papers. §.§ Decay of the inflaton background towards fermions Besides the unavoidable couplings to the gravitational sector required to achieve successful inflation, the additional vector fields can also couple to fermions of the SM through gauge couplings. Let's consider a new U(1)_X gauge symmetry <cit.> associated with the vector fields responsible for inflation and ask SM fermions f to be charged under this new gauge group with charges q_f and gauge coupling g_X. We can, of course, image much larger groups of symmetry for this additional "dark" gauge. However, usually, these extensions would embed the SM gauge group in a larger set of non-abelian symmetry transformations, many of which mix quarks and leptons. 
We consider the simple possibility of only one additional abelian gauge group U(1)_X. This additional gauge coupling allows for the vector inflatons to decay into SM fermions at tree level. We emphasize that the vector inflaton scenario, contrary to the standard scalar inflaton, provides a natural large coupling to fermionic states. Indeed, in the vanilla reheating scenario for a scalar inflaton ϕ, we rely on an effective Yukawa-like coupling of the form ∝ yϕ ff̅. However, this coupling should only emerge as an effective coupling generated by heavy fermions integrated out and is not a fundamental coupling of the theory, ϕ is not charged under the SM gauge group. On the other hand, in the vector inflaton scenario, gauge coupling to fermions arising from a GUT framework is natural and only constrained by the fundamental symmetry imposed on the Universe. To compute the reheating temperature generated by the decay of the inflaton into fermions, we consider the following model for interaction between fermions and vector inflaton fields ℒ⊃ -1/4 F_μνF^μν -q_fg_Xf̅A^μγ_μ f + if̅∂f 1[baseline=-0.1cm] [every blob=/tikz/fill=gray!30,/tikz/inner sep=2pt] (i1) at (1, 1.2) f̅; (i2) at (1,-1.2) f; (m) at (0,0); (e1) at (-1.5,0) A_ ; * (m) – [boson, style=red] (e1), (i1) – [fermion] (m), (m) – [fermion] (i2); The fermions can be considered massless in comparison to the energy transferred through the inflaton decay. The unpolarized squared amplitude associated with the process depicted above is then given by |ℳ̅^(n)|^2 = 1/34q_f^2g_X^2|B|^2|𝒫_n|^2E_n^2 where the subscript n corresponds to the n-th oscillating mode of each background vector field B(t). We can compute from this transition amplitude the rate of production of these fermions from the background R=2× 3/8π∑_n=1^+∞|ℳ̅^(n)|^2 the factor of 2 stands for the fact that two fermions are produced per decay, whereas the factor 3 corresponds to the three orthogonal vector fields. Note that We now have to solve the system of Boltzmann equations (<ref>) for radiation (in the form of fermionic particles) and background energy densities. Solutions for ρ_R are the following in the limit a≫ a_ endρ_R(a) ≃∑_i (q_f^i)^2 N_f^i ×3^1-k/kg_X^2 λ^1/kM_P^4k^3/π(π/2)^3/2(Γ(1/k + 1/2)/Γ(1/k))^3 ∑_n=1^∞n^3|𝒫_n|^2 ×(k+2/14-2k)(/M_P^4)^k-1/k(/a)^6k-6/k+2         (k<7) ρ_R(a) = ∑_i (q_f^i)^2 N_f^i ×343 g_X^2λ^1/7M_P^4/3^6/7π(π/2)^3/2(Γ(9/14)/Γ(1/7))^3 ∑_n=1^∞n^3|𝒫_n|^2 ×(/M_P^4)^6/7(/a)^4log(a/)                (k=7) ρ_R(a) ≃∑_i (q_f^i)^2 N_f^i ×3^1-k/kg_X^2 λ^1/kM_P^4k^3/π(π/2)^3/2(Γ(1/k + 1/2)/Γ(1/k))^3 ∑_n=1^∞n^3|𝒫_n|^2 ×(k+2/2k-14)(/M_P^4)^k-1/k(/a)^4             (k>7) where N_f^i accounts for the number of fermionic states for each SM fermion family produced by the vector inflaton decays. The assignment of charge under U(1)_X of each fermion is given in the following section, in table <ref>. To evaluate the reheating temperature, we take for simplicity the same charge for every fermion under U(1)_X. 
From these expressions, we can compute the associated reheating temperature as a function of the model parameters for each value of k π^2 g^*_ RH/30^4 = M_P^4 (α_k (k+2)/14-2k)^k                            (k<7) π^2 g^*_ RH/30^4 ∼ M_P^4 (α_7)^7                                         (k=7) π^2 g^*_ RH/30^4 = M_P^4 (ρ_ end/M_P^4)^k-7/k-4(α_k (k+2)/2k-14)^3k/k-4         (k>7) with α_k = ∑_i (q_f^i)^2 N_f^i 3^1-k/kg_X^2 λ^1/kk^3/π(π/2)^3/2(Γ(1/k + 1/2)/Γ(1/k))^3 ∑_n=1^∞n^3|𝒫_n|^2 where, in the specific case of k=7, we neglected the logarithmic dependence of the radiation energy density with respect to the scale factor to obtain an approximate reheating temperature. In our numerical analysis, we underline that the true numerically evaluated value is used for k=7. We also note that for k ≤ 7, inflaton decay provides a reheating temperature independent of , which is a usual result for the generic decay rate of inflaton towards bosons or fermions. On the other hand, for k>7, the reheating temperature depends strongly on . We provide in the table <ref> the sums of Fourier coefficients, numerically evaluated, needed to compute the reheating temperature from the decay of vector inflaton towards SM fermions. We illustrate in Fig.(<ref>) the reheating temperature associated with the decay of the vector inflaton background as a function of the equation of state of the inflaton fluid for different values of the gauge coupling g_X. However, the charges q_f are not completely free. Indeed, asking for the Standard Model terms to be invariant under U(1)_X limits the charges q_f allowed. We show in table <ref> the assignment of charges for fermions as a function of a free parameter x<cit.>. To illustrate the result of the vector inflaton decay to reheat the Universe, we considered in Fig.(<ref>) x=1. We underline that the U(1)_X gauge lets the freedom to choose arbitrary large values of x, leading to potentially very large couplings of fermions to vector inflaton. In the specific case of x=0, we recover the well known B-L gauge <cit.>. As it can be seen, the Universe reaches a high reheating temperature up to T_ RH∼ 10^13, for w>0.6 (k>8), and g_X ∼ 0.1. This is not surprising as the gauge coupling at high scale in a unified scenario is usually quite high. Moreover, there are no kinetic suppressions due to effective mass, which are usually present with Yukawa-type couplings to the inflaton field <cit.>. This value of gauge coupling is, moreover, natural from a GUT point of view, where g_X≃ g_GUT≃ 0.5. This corresponds to an almost instantaneous perturbative reheating process after inflation. If one wants to be more precise, taking into account the dependence of the gauge coupling with the energy, one should consider the amplitude |ℳ̅^(n)|^2 = 4/3q_f^2|B|^2 × g_X^2(E_n)|𝒫_n|^2E_n^2 There is no analytical solution for ρ_R in this case, and g_X(E_n) depends on the breaking scheme of the unified group. As an example, we show in Fig.(<ref>), black line, the case of the breaking pattern SO(10) → SU(4)× SU(2)_L× U(1)_R  [ 126] → SU(3)_c× SU(2)_L× U(1)_Y where g_X(T_ GUT) = 0.53, with T_ GUT = 1.7× 10^15. The gauge coupling runs very slightly down to its value at E_n=M(). One of the main conclusion of our analysis, is that for GUT–like couplings, we expect a large reheating temperature due to the efficient decay of the vector inflaton into charged particles. 
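As a rough numerical cross-check of the k<7 expression above, the sketch below evaluates the reheating temperature for sample values of k and g_X; the charge factor, the Fourier sum and g^*_RH entered here are placeholder assumptions and should be replaced by the actual charge assignments and the tabulated Fourier coefficients.

import numpy as np
from scipy.special import gamma

M_P = 2.435e18                          # reduced Planck mass in GeV
A_S = np.exp(3.044) * 1e-10             # scalar amplitude, ln(10^10 A_S) = 3.044
N_star = 55
g_star_RH = 106.75                      # relativistic degrees of freedom at T_RH (assumption)
charge_factor = 50.0                    # schematic sum_i (q_f^i)^2 N_f^i (placeholder)
fourier_sum = 0.1                       # schematic sum_n n^3 |P_n|^2 (placeholder)

def T_RH_decay(k, g_X):
    lam = 18 * np.pi**2 * A_S / (6**(k / 2) * N_star**2)
    alpha_k = (charge_factor * 3**((1 - k) / k) * g_X**2 * lam**(1 / k)
               * k**3 / np.pi * (np.pi / 2)**1.5
               * (gamma(1 / k + 0.5) / gamma(1 / k))**3 * fourier_sum)
    rhs = M_P**4 * (alpha_k * (k + 2) / (14 - 2 * k))**k     # valid for k < 7 only
    return (30.0 * rhs / (np.pi**2 * g_star_RH))**0.25

for k, g_X in [(4, 0.1), (6, 0.1), (6, 1e-3)]:
    print(f"k = {k}, g_X = {g_X:g}:  T_RH ~ {T_RH_decay(k, g_X):.2e} GeV")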
§.§ Effective anomalous couplings of inflaton background In the preceding section, we considered that the vector fields could decay through their couplings to SM massless fermions. However, we can imagine a spectrum where only heavy fermions are coupled to the background vector fields and are simultaneously charged under other gauges. In this case, if the heavy fermions are heavier than the (effective) mass of the background fields, they must be integrated. We underline that such anomalous effective couplings are generic <cit.> and can arise from a complete anomaly-free theory, considering the SM spectrum and additional massless fermions, vector fields, and scalars. Under a spontaneous symmetry-breaking mechanism that can also provide masses to vector fields and fermions via the VEV of a scalar field (Stueckelberg and Higgs mechanism), heavy fermions can be much heavier than the massive vector fields. These heavy fermions will, in this case, generate effective anomalous couplings between the background fields and the other gauge fields through generalized Chern Simons terms (GCS) and axionic couplings. They take the form <cit.> Γ^AYY_μνρσ|_ axion ∝ g_Xg_jg_k t_Ajkp_μ/p^2_νρστk_2^σ k_1^τ Γ^AYY_μνρσ|_ GCS  ∝ g_Xg_jg_k t_Ajk_μνρσ( k^σ_2 - k^σ_1 ) Γ^AAY_μνρσ|_ axion ∝ g_X^2g_i t_AAi(k_1ν/k_1^2_ρμτσ + k_2ρ/k_2^2_μντσ)k_2^σ k_1^τ Γ^AAY_μνρσ|_ GCS  ∝ g_X^2g_i t_Ajk_μνρσ( p^σ_2 - p^σ_1 ) where (k_1, k_2) are the outgoing momenta and (p, p_1,p_2) the ingoing momenta. The different effective vertices are depicted in the figure below. 0.85[baseline=-0.1cm] [every blob=/tikz/fill=gray!30,/tikz/inner sep=2pt] (i1) at (0.75, 1.2) Y_; (i2) at (0.75,-1.2) Y_; (f1) at (0,0.75); (f2) at (0,-0.75); (f3) at (-1.25,0); (e1) at (-2.5,0) A_ ; * (f1) – [boson, style=red] (i1), (f1) – [fermion] (f2), (i2) – [boson, style=red] (f2), (f3) – [fermion, edge label=heavy ψ] (f1), (f2) – [fermion] (f3), (e1) – [boson, style=red] (f3) ; → 0.85[baseline=-0.1cm] [every blob=/tikz/fill=gray!30,/tikz/inner sep=2pt] (i1) at (0.75, 1.2) Y_; (i2) at (0.75,-1.2) Y_; (f3) at (-0.25,0); (e1) at (-1.5,0) A_ ; * (f3) – [boson, style=red] (i1), (i2) – [boson, style=red] (f3), (e1) – [boson, style=red] (f3) ;  , 0.85[baseline=-0.1cm] [every blob=/tikz/fill=gray!30,/tikz/inner sep=2pt] (i1) at (0.75, 1.2) Y_; (i2) at (0.75,-1.2) Y_; (f1) at (0,0.5); (f2) at (0,-0.5); (ff3) at (-0.5,0); (f3) at (-1.25,0); (e1) at (-2.5,0) A_ ; * (f1) – [boson, style=red] (i1), (f1) – [fermion] (f2), (i2) – [boson, style=red] (f2), (ff3) – [fermion, edge label=heavy ψ] (f1), (f2) – [fermion] (ff3), (ff3) – [scalar] (f3), (e1) – [boson, style=red] (f3) ; → 0.85[baseline=-0.1cm] [every blob=/tikz/fill=gray!30,/tikz/inner sep=2pt] (i1) at (0.75, 1.2) Y_; (i2) at (0.75,-1.2) Y_; (ff3) at (-0.25,0); (f3) at (-1,0); (e1) at (-2,0) A_ ; * (ff3) – [boson, style=red] (i1), (i2) – [boson, style=red] (ff3), (ff3) – [scalar] (f3), (e1) – [boson, style=red] (f3) ; 0.85[baseline=-0.1cm] [every blob=/tikz/fill=gray!30,/tikz/inner sep=2pt] (i1) at (-0.75, 1.2) A_; (i2) at (-0.75,-1.2) A_; (f1) at (0,0.75); (f2) at (0,-0.75); (f3) at (1.25,0); (e1) at (2.5,0) Y_ ; * (f1) – [boson, style=red] (i1), (f1) – [fermion] (f2), (i2) – [boson, style=red] (f2), (f3) – [fermion, edge label'=heavy ψ] (f1), (f2) – [fermion] (f3), (e1) – [boson, style=red] (f3) ; → 0.85[baseline=-0.1cm] [every blob=/tikz/fill=gray!30,/tikz/inner sep=2pt] (i1) at (-0.75, 1.2) A_; (i2) at (-0.75,-1.2) A_; (f3) at (0.25,0); (e1) at (1.5,0) Y_ ; * (f3) – [boson, style=red] (i1), (i2) – [boson, style=red] 
(f3), (e1) – [boson, style=red] (f3) ;  ,  0.85[baseline=-0.1cm] [every blob=/tikz/fill=gray!30,/tikz/inner sep=2pt] (i1) at (-0.75, 1.2) A_; (i2) at (-0.75,-1.2) A_; (f1) at (0,0.5); (f2) at (0,-0.5); (ff3) at (0.5,0); (f3) at (1.25,0); (e1) at (2.5,0) Y_ ; * (f1) – [boson, style=red] (i1), (f1) – [fermion] (f2), (i2) – [boson, style=red] (f2), (ff3) – [fermion, edge label'=heavy ψ] (f1), (f2) – [fermion] (ff3), (ff3) – [scalar] (f3), (e1) – [boson, style=red] (f3) ; → 0.85[baseline=-0.1cm] [every blob=/tikz/fill=gray!30,/tikz/inner sep=2pt] (i1) at (-0.75, 1.2) A_; (i2) at (-0.75,-1.2) A_; (ff3) at (0.25,0); (f3) at (1,0); (e1) at (2,0) Y_ ; * (ff3) – [boson, style=red] (i1), (i2) – [boson, style=red] (ff3), (ff3) – [scalar] (f3), (e1) – [boson, style=red] (f3) ;          However, we note first that in the case of on-shell ingoing and outgoing spin-one fields, the axionic couplings do not contribute to any background decay or scattering processes, as the contraction with external polarization vector with the different momenta vanishes. In addition, massive spin-one decay towards massless gauge bosons is forbidden by the Landau-Yang theorem. Then, the processes involving A→ YY are forbidden. Let us consider the last possibility of vector field scattering towards a massless gauge field through the effective GCS coupling. We see that for background fields, coherently oscillating, we have a vanishing contribution as p_1 = p_2 in this case (it is as considering the rest frame of the massive vector fields). The only non-vanishing contribution from these effective couplings, arising from heavy fermions in the spectrum, seems to be "bremsstrahlung" like emission of massless gauge fields with a massive vector field A_μ in the final state. This process should participate in the emission of radiation from the background as well as in the fragmentation of the background (loss of coherence in the final state for the vector field) but is a negligible contribution to reheating. Finally, we want to discuss the possibility of a next-order process that could contribute to reheating the Universe through effective couplings. We can consider a four-point one-loop amplitude with two background vector fields and two gauge fields of the SM through a "box" or "light to light" process <cit.>. 1[baseline=-0.1cm] [every blob=/tikz/fill=gray!30,/tikz/inner sep=2pt] (i1) at (-1.75, 1.2) A_; (i2) at (-1.75,-1.2) A_; (l1) at (-0.5, 0.5); (l2) at (-0.5,-0.5); (r1) at (0.5, 0.5); (r2) at (0.5,-0.5); (f1) at (1.75, 1.2) Y_; (f2) at (1.75,-1.2) Y_; * (i1) – [boson, style=red] (l1), (i2) – [boson, style=red] (l2), (f1) – [boson, style=red] (r1), (f2) – [boson, style=red] (r2), (l1) – [fermion] (l2), (l2) – [fermion] (r2), (r2) – [fermion, edge label'=heavy ψ] (r1), (r1) – [fermion] (l1);    →   1[baseline=-0.1cm] [every blob=/tikz/fill=gray!30,/tikz/inner sep=2pt] (i1) at (-1.75, 1.2) A_; (i2) at (-1.75,-1.2) A_; (m) at (0,0); (f1) at (1.75, 1.2) Y_; (f2) at (1.75,-1.2) Y_; * (i1) – [boson, style=red] (m), (i2) – [boson, style=red] (m), (f1) – [boson, style=red] (m), (f2) – [boson, style=red] (m); Considering again heavy fermions circulating in the loop, one can extract the effective couplings integrating out the messengers obtaining the following effective Lagrangian[Similar couplings are also expected with heavy bosonic messengers like in <cit.>.]<cit.> √(-g)ℒ_ eff ⊃ -g_X^2g_SM^2/Λ^4 90(4π)^2[G_μνG^μνF_ρσF^ρσ + 7/4G_μνG̃^μν F_ρσF̃^ρσ.                   . 
+7/2G_μνG̃_ρσF^μνF̃^ρσ +2G_μνG_ρσF^μνF^ρσ] where Λ is the heavy mass scale associated with the fermions circulating in the loop, G is the field strength of any SM gauge field, F is the field strength of the background vector field, and F̃^μν = 1/2_μναβF^αβ, is the dual field strength. These effective couplings can generate 2 to 2 processes A_iA_i → SM  SM that contributes to reheating as depicted in Figure (<ref>). However, this contribution is suppressed by the 4-point vertices and provides amplitude of the form -iℳ^(n)∝ ig_X^2g_SM^2/1440π^2Λ^4|B|^2𝒫^(2)_n E_n^4 After summing over the final states and averaging over the initial state, we have the following unpolarized square amplitude for the box diagram |ℳ̅^(n)|^2 = 1/92^2/21123 g_SM^4g_X^4/22118400π^4Λ^8|B|^4 |𝒫^(2)_n|^2 E_n^8 leading to the following rate of SM gauge bosons production R = 3× 2×N_Y/8π∑_n=1^∞|ℳ̅^(n)|^2 = 1123N_Y g_SM^4g_X^4/132710400π^5Λ^8 |B|^4 ∑_n=1^∞ E_n^8|𝒫^(2)_n|^2 where we consider g_SM = 0.1 and N_Y = 12 for SM gauge fields. To compare this process of radiation production with the one of gravitational scattering discussed in Section <ref>, we compute the ratio of the anomaly-induced process rate (<ref>) over the gravitational rate (<ref>), evaluated at the end of inflation when a=. In fact, if the anomalous induced box rate is subdominant at the end of inflation, it will stay subdominant during the whole reheating process, as it scales with a higher power of E_n than the gravitational rate of radiation production. Indeed, E_n(a) decreases as the Universe expands, so the ratio of the rates is always decreasing with time. We show in the left panel of Fig.(<ref>) the ratio of these rates at the end of inflation, as a function of the effective mass scale Λ, for different values of g_X. We see that for GUT-like gauge couplings and Λ≤ 10^15, the effective couplings can be competitive with the gravitational effects and can be even much more efficient for a quite small effective scale Λ≃10^14. On the right panel of Fig.(<ref>), we computed the reheating temperature obtained by combining the anomalous box diagrams and the gravitational process. We recover the preceding result, noting that the reheating temperature can be efficiently enhanced as soon as the effective mass scale of the fermions ranges below Λ≲ 10^15 GeV, even reaching T_ RH≳10^12 GeV for GUT-type couplings g_X and Λ≃ 5× 10^13 GeV. § LEPTOGENESIS §.§ Right handed neutrinos We can finally consider the possibility that the vector inflatons produce right-handed neutrinos (RHN), which could contribute through their decay to the generation of a lepton asymmetry. To realize this possibility, we ask for the RHN also to be charged under the additional U(1)_X gauge group associated with the background vector fields. We should also rely on the existence of an additional scalar degree of freedom S in addition to the SM Higgs field H, which would be responsible for the mass scale of the new sector after a spontaneous symmetry-breaking mechanism <cit.>. This additional scalar acquires a vacuum expectation value (VEV) v_S, different from the electroweak (EW) vacuum, v_H, and gives mass to the vector fields through the Stueckelberg mechanism. The two energy scales are decoupled, and we can assume that the additional gauge group is broken at really high energy, close to the inflationary scale. Then, the VEV of this new scalar can also generate a large Majorana mass, M_N, to the RHN through Yukawa-like couplings between RHN and the new scalar. 
Finally, the Higgs field can have a specific charge under the new gauge U(1)_X to not spoil the minimal Yukawa sector for SM fermions. In this case, Yukawa-like interactions between RHN, SM Higgs doublet, and left-handed SM fermions are allowed and would generate Dirac mass, m_D for neutrinos after EW symmetry breaking, depending on the VEV, v_H, of the Higgs field. Hence, the additional gauge, as well as the existence of RHN, can explain the tiny mass of the active neutrinos through the well-known seesaw mechanism <cit.>. We remind the readers that there are three types of seesaw models, which differ by the nature of the exchanged heavy particles in the model: (i) Type-I: SM gauge fermion singlets (ii) Type-II: SM SU(2)_L scalar triplets (iii) Type-III: SM SU(2)_L fermion triplets. We consider here the Type-I scenario that can be realized with only two generations of right-handed neutrino <cit.>. In this model, the light active neutrinos acquire their mass through the seesaw suppression of the order m_ν i∼(y_N)_ii^2⟨ H ⟩^2/M_N_i. In what follows, we will consider the production of one generation of RHN, but the discussion can be generalized to additional RHN families. §.§ RHN production and Leptogenesis We consider RHN coupled to the vector fields through the coupling ℒ_ int⊃ q_N g_XN_Rγ_μ A^μ N_R , where g_X is the gauge coupling associated to U(1)_X and q_N is the charge of the RHN. This coupling allows producing the RHN out-of-equilibrium from the background, as long as they are less massive than the vector fields, i.e., M_N<M[N_R fields are not mass eigenstates, but M_N can approximate the mass of these fields in the limit M_N≫ m_D.] and the number density of RHN produced is the following at the end of the reheating phase n_N(a_ RH)= √(3)q_N^2g_X^2M_P^3/72k^2(k+2)(Γ(1/k+1/2)/Γ(1/k))^2∑_n=1^∞n^2|𝒫_n|^2(ρ_ RH/M_P^4)^1/2 , where = π^2g^∗_ RH/30^4. We also provide in the table <ref> the sums of Fourier coefficients, numerically evaluated, needed to compute the number density of RHN produced from the decay of vector inflaton. Then these RHN can also be coupled to SM leptons, L, and Higgs boson, H, through a Yukawa-like coupling, y_N ≃ m_ν M_N/ v_H^2, where ⟨ H⟩≡ v_H ≈ 174 GeV is the SM Higgs doublet VEV, ℒ_ int⊃ y_N LH̃N_R if the charge of H under U(1)_X is opposite to the sum of the charges of N_R and L. We provide the charge assignments required under SM gauge and the additional U(1)_X, for SM states and RHN, in table <ref>. These charge assignments ensure that SM Yukawa terms and the additional Yukawa terms for RHN are U(1)_X invariant. The above coupling allows the heavy RHH to experience out-of-equilibrium decay towards Higgs bosons and leptons. The resulting CP-asymmetry is <cit.>ϵ_Δ L = ∑_α[Γ(N_R → L_α+H)-Γ(N_R →L_α+H^*)]/∑_α[Γ(N_R→ L_α+H)+Γ(N_R →L_α+H^*)] . It is generated by the interference between tree-level and one-loop processes and can produce an out-of-equilibrium lepton asymmetry, depending on the abundance of RHN produced by the background. The CP asymmetry can be expressed as <cit.>ϵ_Δ L≃3 δ_ eff/16 π M_N m_ν ,max/v_H^2 , where δ_eff is the effective CP violating phase in the neutrino mass matrix with 0≤δ_ eff≤ 1, and we take m_ν,max = 0.05 eV as the heaviest light neutrino mass. The produced lepton asymmetry is eventually converted to baryon asymmetry via electroweak sphaleron processes leading to Y_B = n_B/s = 28/79ϵ_Δ L n_N()/s , where n_N() is the number density of RHN at the end of reheating and s = 2 π^2 g_ RH^3 /45 is the entropy density. 
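As a purely numerical illustration of the two expressions above (a minimal sketch in natural units; the function names are ours, and δ_eff = 1, n_N/s = 1 are placeholder inputs rather than values derived here), one can check the normalisation quoted in the next sentence: with m_ν,max = 0.05 eV and v_H = 174 GeV, M_N = 2.5 × 10^6 GeV gives Y_B/(n_N/s) ≈ 8.7 × 10^-11.

```python
import math

GeV = 1.0
eV = 1.0e-9 * GeV            # natural units: everything expressed in GeV

def cp_asymmetry(M_N, delta_eff=1.0, m_nu_max=0.05 * eV, v_H=174.0 * GeV):
    # epsilon_DeltaL ~ (3 delta_eff / 16 pi) * M_N * m_nu_max / v_H^2
    return 3.0 * delta_eff / (16.0 * math.pi) * M_N * m_nu_max / v_H**2

def baryon_asymmetry(M_N, nN_over_s, delta_eff=1.0):
    # Y_B = (28/79) * epsilon_DeltaL * (n_N / s), with 28/79 the sphaleron conversion factor
    return 28.0 / 79.0 * cp_asymmetry(M_N, delta_eff) * nN_over_s

# With delta_eff = 1 and n_N/s = 1 (placeholders), M_N = 2.5e6 GeV gives ~8.7e-11,
# i.e. the prefactor of the closed-form expression quoted in the next sentence.
print(baryon_asymmetry(2.5e6 * GeV, nN_over_s=1.0))
```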
The final asymmetry then becomes Y_B ≃ 8.7× 10^-11 δ_ eff(m_ν,max/0.05 eV) (M_N/2.5×10^6 GeV) .n_N/s|_ , while the observed value, as reported by Planck <cit.>, is Y_B^obs≃ 8.7× 10^-11. We present in Fig.(<ref>) the results for the generation of baryon asymmetry through the decay of RHN, produced by the vector inflaton background. On the left part, we show the baryon asymmetry generated as a function of the equation of state during reheating, w_B, for different values of the gauge couplings g_X. The mass of RHN is fixed to M_N=10^3. We observe that, on the one hand, decreasing the gauge coupling decreases the production of RHN. We should expect a decrease in the baryonic asymmetry. However, lower values of g_X also decrease the reheating temperature , see Fig.(<ref>), allowing for a lower entropy density s at the end of the reheating phase, increasing then Y_B, see Eq.(<ref>). The complexity of the intricacy can be observed on the left side of Figure (<ref>) and is heavily influenced by the equation of state, w_B. The baryonic asymmetry following Eq.(<ref>) increases for decreasing g_X when the equation of states 1/3 ≤ w_B ≤ 5/9. Indeed, in these cases, the radiation energy density decreases much faster than the inflation background energy density while the Universe expands. This effect is more prominent for lower values of the gauge coupling, as for large values of g_X, the reheating is almost instantaneous, and the difference of equation of state is not important. This can be shown by the flatness of the black line for g_X = 0.53. As a consequence, we see on the right part of Fig.(<ref>) that the necessary RHN mass to produce the observed baryon asymmetry can be as low as M_N ≥ 10^2 when 1/3 ≤ w_B ≤ 5/9, for g_X=10^-3. For larger values of g_X, the dependence on M_N as a function of k is then less important, and the mass for which the right amount of asymmetry is produced converges towards M_N ≃ 10^5. This can also be understood by the fact that reheating is very fast in this case and does not depend on the equation of state w_B. Finally, the generation of a lepton asymmetry through the decay of the RHN should not be washed out by inverse decays of SM particles that are thermalized in the bath. Inverse decays are potentially dangerous until T = M_N, where these processes become exponentially suppressed, and the inverse decay rate is given by Γ_ th = y_N^2T/8π. For the masses M_N that satisfy the right baryon asymmetry in Figure <ref>, we see that the decay of RHN occurs after the reheating, i.e., M_N<, for the corresponding values of g_X and w_B. Then, we recover that at T=M_N, this decay rate is always smaller than H(T=M_N), if M_N<(8π v_H^2)/(√(3) m_ν^2M_P) ≃ 10^8. In the last equality, we have considered the maximum value for the bound on active neutrino mass, m_ν, max = 0.05 eV. Hence, most of the parameter space we consider cannot be spoiled by inverse decays except for the case w_B=0 and g_X<0.1. For this case, the lepton asymmetry generated may be suppressed by inverse decays of SM particles. § CONCLUSIONS In this work, we studied the reheating and leptogenesis in the case of vector inflatons A_μ. We concentrated on particle perturbative production during the oscillating background phase, first insisting on the gravitational production induced by the presence of non-minimal coupling imposed by an isotropic and homogeneous Universe. We then included processes involving the graviton exchange and compute the reheating temperature by combining the minimal and non-minimal sources. 
The result illustrated in Fig.(<ref>) shows that a reasonable reheating temperature can be reached only for a large equation-of-state parameter w. We then extended our study to decays into fermions via direct or anomalous couplings. We showed that a large reheating temperature can be induced by such decays due to the gauge nature of the coupling, see Fig.(<ref>). We also studied the existence of couplings appearing through the mechanisms of anomaly cancellation. The presence of Chern-Simons terms also allows the reheating to proceed, dominating over the gravitational production for an effective mass scale Λ≲ 10^15 GeV, as can be seen in the left panel of Fig.(<ref>). Such anomalous processes lead, however, to a much lower reheating temperature than direct decays of A_μ, even if it can reach T_RH ≳ 10^12 GeV for a heavy-fermion mass scale Λ∼ 10^13 GeV and a GUT-type coupling g_X, as can be seen in the right panel of Fig.(<ref>). We then looked at the possibility of generating the baryon asymmetry through a leptogenesis process, coupling the right-handed neutrino to A_μ. Such a coupling can easily be justified by gauging an extra U(1)_X. We showed that for a GUT-like coupling g_X it is possible to achieve an efficient reheating process while producing a sufficient amount of baryonic asymmetry, as shown in Fig.(<ref>), with M_N ≃ 10^5 GeV. Finally, we noted that the vectorial nature of the coupling avoids the generation of a large effective mass, as occurs for a scalar inflaton, and thus escapes the kinematic suppression of the production mechanisms. The vectorial nature of the coupling should also drastically affect the phenomenology of preheating and will be the subject of future work. § ACKNOWLEDGEMENTS The authors want to thank Juan Pablo Beltran Almeida and Fredy Alexander Ochoa Perez for very fruitful discussions, as well as Marco Peloso, Essodjolo Kpatcha, and Jong-Hyun Yoon for helping to clarify issues. This project has received funding/support from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 860881-HIDDeN, and the IN2P3 Master Projet UCMN. P.A. was supported by FWF Austrian Science Fund via the SAP P 36423-N.
http://arxiv.org/abs/2307.04674v1
20230710162020
Optimal Robot Path Planning In a Collaborative Human-Robot Team with Intermittent Human Availability
[ "Abhinav Dahiya", "Stephen L. Smith" ]
cs.RO
[ "cs.RO" ]
Optimal Robot Path Planning In a Collaborative Human-Robot Team with Intermittent Human Availability. Abhinav Dahiya and Stephen L. Smith. This research is supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC). Abhinav Dahiya and Stephen L. Smith are with the Department of Electrical and Computer Engineering, University of Waterloo, Waterloo (mailto:[email protected]@uwaterloo.ca, mailto:[email protected]@uwaterloo.ca). This paper presents a solution for the problem of optimal planning for a robot in a collaborative human-robot team, where the human supervisor is intermittently available to assist the robot in completing tasks more quickly. Specifically, we address the challenge of computing the fastest path between two configurations in an environment with time constraints on how long the robot can wait for assistance. To solve this problem, we propose a novel approach that utilizes the concepts of budget and critical departure times, which enables us to obtain an optimal solution while scaling to larger problem instances than existing methods. We demonstrate the effectiveness of our approach by comparing it with several baseline algorithms on a city road network and analyzing the quality of the obtained solutions. Our work contributes to the field of robot planning by addressing the critical issue of incorporating human assistance and environmental restrictions, which has significant implications for real-world applications. § INTRODUCTION Robots have come a long way in the past decades, with increasing levels of autonomy transforming the way they operate in various domains, from factories and warehouses to homes and public spaces <cit.>. However, navigating dynamic environments effectively continues to be a formidable challenge. Despite the significant strides made in robot autonomy, human oversight remains vital for enhancing safety and efficiency and for complying with regulatory requirements. For example, a robot navigating through an urban environment must abide by traffic regulations and may require human assistance in busy or construction areas to ensure safety or expedite operations. Similarly, in an exploration task, robots may require replanning due to changes in the environment, while the supervisor has already committed to a supervision schedule for other robots and is only intermittently available. By considering the operator's availability and environmental restrictions, robots can plan their paths more efficiently, avoid unnecessary waiting, and decide when to use human assistance. Figure <ref> shows the problem overview with an example of a robot navigating in a city. However, the presented problem can be generalized to any arbitrary task which can be completed via different sub-tasks defined using precedence and temporal constraints. We consider the problem of robot planning with the objective of finding the fastest path between two configurations.
We demonstrate our approach through an example of robot navigation in an urban environment with intermittent operator availability, varying travel speeds, and waiting limits. Specifically, we consider a city road network where the robot can traverse through different locations either autonomously or with the assistance of a human supervisor, each taking different amounts of time. However, the supervisor is only available at certain times, and the robot has a limited amount of time to wait at a location before it must move on to its next destination. By formulating the problem in this way, we aim to address the challenge of collaborative robot planning in real-world environments where the availability of human supervisors may be limited and thus can affect the optimal route for the robot. In this paper, we present a method to compute the fastest path from one location to another while accounting for all these constraintsThe map shown in Fig. <ref> shows the road network of the city of Waterloo, generated using https://www.qgis.org/en/site/index.htmlQGIS, https://www.openstreetmap.org/about/OpenStreetMap and https://openrouteservice.org/OpenRouteService.. The problem of robot path planning with operator allocation in dynamic networks is inspired by real-world scenarios where the availability of human assistance and thus the robot's speed of travel and its ability to traverse certain paths can change over time, e.g., <cit.>. Traditional methods, such as time-dependent adaptations of the Dijkstra's algorithm, are not designed to handle situations in dynamic environments where waiting is limited, and the task durations may not follow the first-in-first-out (FIFO) property <cit.>. This means that a robot may arrive at its target location earlier by departing later from its previous location, for example, by using human assistance. To address these challenges, we draw on techniques from the time-dependent shortest path literature to solve the problem. Unfortunately, existing optimal solution techniques are severely limited by their computational runtime. In this paper, we propose a novel algorithm that is guaranteed to find optimal solution and runs orders of magnitude faster than existing solution techniques. Contributions: Our main contributions are as follows: 1) We propose a novel graph search algorithm for the collaborative planning problem with intermittent human availability. The algorithm operates by intelligently selecting the times of exploration and by combining ranges of arrival times into a single search node. 2) We provide the proof that the algorithm generates optimal solutions. 3) We demonstrate the effectiveness of our approach in a city road-network, and show that it outperforms existing approaches in terms of computational time and/or solution quality. § BACKGROUND AND RELATED WORK In this section, we discuss some relevant studies from the existing literature in the area of robot planning with human supervision/collaboration. We also look into how the presented problem can be solved using existing techniques from related fields. Planning with Human Collaboration: The problem of task allocation and path planning for robots operating in collaboration with humans has been studied extensively in recent years. 
Researchers have proposed various approaches, such as a data-driven approach for human-robot interaction modelling that identifies the moments when human intervention is needed <cit.>, and a probabilistic framework that develops a decision support system for the human supervisors, taking into account the uncertainty in the environment <cit.>. In the context of autonomous vehicles, studies have investigated cooperative merging of vehicles at highway ramps <cit.> and proposed a scheduling algorithm for multiple robots that jointly optimize task assignments and human supervision <cit.>. Task allocation is a common challenge in mixed human-robot teams across various applications, including manufacturing <cit.>, routing <cit.>, surveying <cit.>, and subterranean exploration <cit.>. In addition, the problem of computing the optimal path for a robot under time-varying human assistance bears similarity to queuing theory applications, such as optimal fidelity selection <cit.> and supervisory control of robots via a multi-server queue <cit.>. These studies provide insights into allocating assistance and path planning for robots in collaborative settings, but do not address our specific problem of computing the optimal path for a robot under bounded waiting and intermittent assistance availability. Additionally, our problem differs in that the robot can operate autonomously even when assistance is available, i.e., the collaboration is optional. Time-Dependent Shortest Paths: The presented problem is also related to time-dependent shortest path (TDSP) problems, which aim to find the minimum cost or minimum length paths in a graph with time-varying edge durations <cit.>. Existing solution approaches include planning in graphs with time-activated edges <cit.>, implementing modified A^*<cit.>, and finding shortest paths under different waiting restrictions <cit.>. Other studies have explored related problems such as computing optimal temporal walks under waiting constraints <cit.>, and minimizing path travel time with penalties or limits on waiting <cit.>. Many studies in TDSP literature have addressed the first-in-first-out (FIFO) graphs <cit.>, while others have explored waiting times in either completely restricted or unrestricted settings. However, the complexities arising from non-FIFO properties, bounded waiting and the need to make decisions on the mode of operation, i.e., autonomous or assisted, have not been fully addressed in the existing literature <cit.>. The most relevant solution technique that can be used to solve our problem is presented in <cit.>, which solves a TDSP problem where the objective is to minimize the path cost constrained by the maximum arrival time at the goal vertex. This method iteratively computes the minimum cost for all vertices for increasing time constraint value. A time-expanded graph search method <cit.> is another way of solving the presented problem by creating separate edges for autonomous and assisted modes. We discuss these two methods in more detail in Sec. <ref>. As we will see, the applicability of these solution techniques to our problem is limited due to their poor scalability for large time horizons and increasing graph size. § PROBLEM DEFINITION The problem can be defined as follows. We are given a directed graph G = (V,E), modelling the robot environment, where each edge e ∈ E has two travel times corresponding to the two modes of operation: an autonomous time τ(e,0) and an assisted time τ(e,1), with the assumption that τ(e,0) ≥τ(e,1). 
When starting to traverse an edge, the robot must select the mode of operation for the traversal that is used for the entire duration of the edge. While the autonomous mode is always available, the assisted mode can only be selected if the supervisor is available for the entire duration of the edge (under assisted mode). The supervisor's availability is represented by a binary function μ, with μ([t_1,t_2])=1 indicating availability during the time window [t_1,t_2], and 0 otherwise. Additionally, at each vertex v ∈ V, the robot can wait for a maximum duration of w_v ≥ 0 before starting to traverse an outgoing edge. The robot's objective is to determine how to travel from a start vertex to a goal vertex. This can be represented as an execution path𝒫, specified as a list of edges to traverse, the amount of waiting required at intermediate vertices and the mode of operation selected for each edge. The objective of this problem is to find an execution path (or simply path) from a start vertex s ∈ V to a goal vertex g ∈ V∖{s}, such that the arrival time at g is minimized. Given a set 𝒫̂ of all possible paths 𝒫 of arbitrary length n, such that 𝒫⟨(v_1,t_1,w_1,m_1), (v_2,t_2,w_2,m_2), …,(v_n,t_n,w_n,m_n)⟩, we can write the problem objective as follows: min_𝒫∈𝒫̂ t_n s.t. v_1 = s, v_n = g e_v_i v_i+1 ∈ E ∀ i∈[1, n-1] t_i+1 = t_i + w_i + τ(e_v_i v_i+1, m_i) ∀ i∈[1, n-1] w_i ≤w_v_i ∀ i∈[1,n-1] m_i = 1 ⇒μ([t_i + w_i, t_i+1])=1 ∀ i∈[1,n-1]. The first constraint ensures that the path starts at s and ends at g. The second constraint ensures that the topological path is valid in the graph. The third constraint ensures that the path does not violate travel duration requirements at any edge. Fourth constraint ensures that the waiting restrictions are met at each vertex. Finally, the fifth condition ensures that an edge can only be assisted if the operator is available at least until the next vertex is reached. To efficiently solve this problem, we must make three crucial decisions: selecting edges to travel, choosing the mode of operation, and determining the waiting time at each vertex. Our proposed method offers a novel approach to computing the optimal solution. However, before delving into the details of our solution, it is necessary to grasp the concept of budget and how new nodes are generated during the search process. § BUDGET AND NODE GENERATION Since the robot is allowed to wait (subject to the waiting limits), it is possible to delay the robot's arrival at a vertex by waiting at one or more of the preceding vertices. Moreover, the maximum amount of time by which the arrival can be delayed at a particular vertex depends on the path taken from the start to that vertex. Our key insight is that this information about the maximum delay can be used to efficiently solve the given problem by removing the need to examine the vertices at every possible arrival time. We achieve this by augmenting the search space into a higher dimension, using additional parameters with the vertices of the given graph. A node in our search is defined as a triplet (x,a_x,b_x), corresponding to a vertex x ∈ V, arrival time a_x ∈ℤ_≥ 0 and a budget b_x ∈ℤ_≥ 0. The budget here defines the maximum amount of time by which the arrival at the given vertex can be delayed. Thus, the notion of budget allows a single node (x,a_x,b_x) to represent a range of arrival times from [a_x, a_x+b_x] at vertex x. Therefore, the allowed departure time from this vertex lies in the interval [a_x, a_x+b_x+w_x]. 
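To fix notation before describing node generation, the following minimal Python sketch (ours, not the authors' implementation; all names are illustrative) collects the ingredients defined above: the two per-edge travel times, the availability function μ, the waiting limits w_v, and a search node (x, a_x, b_x) together with its implied departure window.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

Vertex = str
Edge = Tuple[Vertex, Vertex]

@dataclass(frozen=True)
class Problem:
    tau: Dict[Edge, Tuple[int, int]]      # edge -> (tau(e,0) autonomous, tau(e,1) assisted)
    wait_limit: Dict[Vertex, int]         # w_v: maximum waiting time at each vertex
    availability: List[Tuple[int, int]]   # supervisor availability windows (assumed closed intervals)

    def mu(self, t1: int, t2: int) -> bool:
        """mu([t1, t2]) = 1 iff the supervisor is available on the whole interval."""
        return any(a <= t1 and t2 <= b for a, b in self.availability)

@dataclass(frozen=True)
class Node:
    vertex: Vertex
    arrival: int  # a_x: earliest arrival time represented by this node
    budget: int   # b_x: arrival can be delayed to any time in [a_x, a_x + b_x]

    def departure_window(self, problem: Problem) -> Tuple[int, int]:
        """Allowed departure times from this node: [a_x, a_x + b_x + w_x]."""
        return (self.arrival,
                self.arrival + self.budget + problem.wait_limit[self.vertex])
```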
§.§ Node Generation The proposed algorithm is similar to standard graph search algorithms, where we maintain a priority search queue, with nodes prioritized based on the earliest arrival time (plus any admissible heuristic). Nodes are then extracted from the queue, their neighbouring nodes are generated and are added to the queue based on their priority. Since in our search a node is defined by the vertex, arrival time and budget, we must determine these parameters for the newly generated nodes when exploring a given node. To characterize the set of nodes to be generated during the graph search in our proposed algorithm, we define the notion of direct reachability as follows. A node (y, a_y, b_y) is said to be directly reachable from a node (x,a_x,b_x) if x and y are connected by an edge, i.e., e_xy∈ E, and it is possible to achieve all arrivals times in [a_y, a_y+b_y] at y through edge e_xy for some departure time t_D ∈ [a_x, a_x+b_x+w_x] from x and some mode of travel. As an example, consider a node (x,10,2) with τ(e_xy,0) = 5 and w_x=3. Then the nodes (y, 15, 5), (y, 16, 4) and (y, 17, 3) are a few directly reachable nodes from (x,10,2) (corresponding to departure times 10, 11 and 12, respectively). Like standard graph search methods, our algorithm aims to generate all nodes directly reachable from the current node during the exploration process. One approach is to generate all directly reachable nodes from the given node (x,a_x,b_x) for all possible departure times in [a_x, a_x+b_x+w_x]. However, this results in redundancy when multiple nodes can be represented collectively using a single node with a suitable budget. As the operator availability changes, the possible arrival times at the next vertex may present themselves as separate blocks of time. A block of arrival times can be represented using a single node, and thus we only need to generate a new node for each arrival time block. To understand this, we consider the example given in Fig. <ref>, where a node (x,a_x,b_x) is being extracted from the queue, and we want to generate the nodes corresponding to a neighbouring vertex y. The arrival time range at x, [a_x, a_x+b_x] is shown as solid purple line. The possible departure window [a_x, a_x+b_x+w_x] is shown as purple dashed line. Under autonomous operation, the edge can be traversed by departing at any time in the departure window, resulting in possible arrival time at vertex y in the interval [a_x+α, a_x+b_x+w_x+α], shown as solid orange line. Therefore, we can represent these possible arrival times using the node n_1 = (y, a_x + α, b_x+w_x), where a_x + α is the earliest arrival time at y and b_x+w_x is the new budget. Note that the new budget is increased from the previous value by an amount of w_x. Under assisted operation, only a subset of departure window is feasible, as shown in Fig. <ref>(b). This results in separate blocks of arrival times at y, shown as solid green lines. The range of arrival times corresponding to these blocks become the budget values for the new nodes. In the given example, two nodes are generated: n_2 = (y, t_D2+β, Δ t_2) and n_3 = (y, t_D3+β, Δ t_3). Note that the feasible departure times are limited by operator availability and t_max. The value of t_max is the minimum of b_x + w_x and α - β. The former quantity limits the departure from x to a_x+b_x+w_x, while the latter comes from the observation that any departure time t_D > a_x + (α - β) will result in arrival at y at a time a_y > a_x + α, and a budget b_y < b_x + w_x. 
However, this arrival time range is already covered by the node generated under autonomous operation. Critical departure times: Note that the earliest arrival times for each of the three new nodes correspond to unique departure times from x (t_D1, t_D2, t_D3 in Fig. <ref>). We refer to these times as critical departure times, as exploring a node only at these times is sufficient to generate nodes that cover all possible arrival times at the next vertex. Since in the presented problem, the edge duration depends on the mode of operation selected, the set of critical departure times for a node is a subset of times when the operator availability changes, and thus can be efficiently determined. Next, we present how these concepts are used by our proposed Budget-A^* algorithm to solve the given problem. § BUDGET A^* ALGORITHM This section details the proposed Budget A^* algorithm and its three constituent functions: EXPLORE, REFINE and GET-PATH. To recall, a node in our search is defined as a tuple (x,a_x,b_x). A pseudo-code for the Budget-A^* algorithm is given in Alg. <ref>, and more details on the constituent functions follow. The algorithm initializes an empty priority queue Q, a processed set S and a predecessor function ψ. It then adds a node (s,0,0) to Q denoting an arrival time of exactly 0 at s. The algorithm iteratively extracts the node with the earliest arrival time (plus an admissible heuristic) from Q, adds it to S, and generates new candidate nodes for each of its neighbors using the EXPLORE function. The REFINE function then checks if these nodes can be added to the queue, removes redundant nodes from Q, and updates predecessor information. The algorithm continues until Q is empty or the goal vertex is reached. The GET-PATH function generates the required path using the predecessor data and waiting limits. §.§ Exploration The EXPLORE function takes in several input parameters: arrival time a_x, budget b_x, waiting limit w_x, edge e_xy, travel durations τ and operator availability μ. The function returns a set 𝒩 of candidate nodes of the form (y, a_i, b_i, m_i), where a_i,b_i,m_i are the arrival time, budget and the mode of operation respectively, corresponding to all critical departure times from the node (x, a_x, b_x) to vertex y. A pseudo-code is shown in Alg. <ref>. As discussed in Sec. <ref>, the autonomous mode generates one new node, while assisted mode can generate multiple nodes depending on operator availability, node budget and task duration. The function first adds a node (y,a_x+α, b_x+w_x) corresponding to the autonomous mode to 𝒩. For the assisted mode, it first computes the maximum useful delay in departure t_max. Next, it generates an ordered set ℱ of feasible departure times from the current node as the times in departure window when it's possible to depart under assisted mode, computed using μ and β (line <ref>). Lines <ref>-<ref> generate a new node (y, a_y, b_y) for each critical departure time t_d, with a budget b_y = 0 and arrival time a_y = t_d + β. The budget is then incremented for each consecutive departure time in ℱ. A gap in ℱ means a gap in arrival time at y indicating that we have considered the complete arrival time range for that critical departure time. This condition is checked in line <ref>, and the node (y,a_y, b_y, 1) is added to 𝒩. Once all departure times in ℱ are accounted for, the set 𝒩 contains all required arrival time and budget pairs (along with the mode of operation) for the given node (x,a_x,b_x) and neighbour y. 
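The node-generation step just described can be summarised by the sketch below (a paraphrase of the EXPLORE routine under the assumption of integer time steps, reusing the Problem and Node containers sketched earlier; it is not the authors' code). It returns one candidate per critical departure time: a single autonomous node plus one assisted node per contiguous block of feasible departures.

```python
def explore(problem: Problem, node: Node, y: Vertex):
    """Candidate (Node, mode) pairs at neighbour y, one per critical departure time."""
    a_x, b_x = node.arrival, node.budget
    w_x = problem.wait_limit[node.vertex]
    alpha, beta = problem.tau[(node.vertex, y)]   # autonomous / assisted edge durations

    # Autonomous travel: one node covering arrivals [a_x + alpha, a_x + b_x + w_x + alpha].
    candidates = [(Node(y, a_x + alpha, b_x + w_x), 0)]

    # Assisted travel: departures later than a_x + t_max are never useful, because the
    # resulting arrivals are already covered by the autonomous node above.
    t_max = min(b_x + w_x, alpha - beta)
    feasible = [t for t in range(a_x, a_x + t_max + 1)
                if problem.mu(t, t + beta)]       # operator available for the whole edge

    # Each maximal run of consecutive feasible departures is one arrival-time block at y:
    # the first element of the run is a critical departure time, its width is the budget.
    i = 0
    while i < len(feasible):
        j = i
        while j + 1 < len(feasible) and feasible[j + 1] == feasible[j] + 1:
            j += 1
        candidates.append((Node(y, feasible[i] + beta, feasible[j] - feasible[i]), 1))
        i = j + 1
    return candidates
```

Each appended assisted node corresponds to one critical departure time, and its budget equals the width of the associated arrival-time block, matching the construction described above.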
§.§ Node Refinement The REFINE function determines which nodes to add or remove from the search queue, based on the newly generated nodes. The function checks if the new node is redundant by comparing its vertex and arrival time window with nodes already in the queue (Alg. <ref> line <ref>). If the new node is found to be redundant, the function returns the original queue without modifications. If not, the new node is added to the queue, and if there is any node in Q with the same vertex and an arrival time range subset of the new node's range (line <ref>), it is removed. The function then returns the updated queue and predecessor function. §.§ Path Generation To get the execution path from start to goal, we use the predecessor data stored in function ψ, which returns the predecessor node vertex and arrival time (x,a_x) along with the mode of travel m_x, used on the edge e_xy for a given vertex-time pair (y,a). However, we need to determine the exact arrival and departure times at each vertex based on wait limits. To achieve this, we use the GET-PATH function shown in Alg. <ref>. The function backtracks from the goal to the start, calculating the exact departure time from the predecessor vertex based on the earliest arrival time at the current vertex and the mode of operation (line <ref>). The exact arrival time is then determined using the departure times and the maximum allowed waiting w (lines <ref>-<ref>). The final path is stored as a list of tuples representing a vertex, the arrival time, waiting time, and mode of operation used. §.§ Correctness Proof Let a vertex-time pair (y, a_y) be called directly reachable from a node (x,a_x,b_x), if vertex y can be reached by departing vertex x between a_x and a_x+b_x+w_x through edge e_xy∈ E. Consider a node (x,a_x,b_x) extracted from Q (line <ref>), and a y ∈neighbors(x) inspected in lines <ref>-<ref>. If (y, a_y) is directly reachable from (x,a_x,b_x), then there is a node in Q with vertex y whose arrival time range contains a_y. For a given node, the critical departure times represent the number of separate arrival time blocks. Also, as discussed earlier, a single block of arrival times can be represented by a node having the earliest arrival time in that block and budget equal to the width of the block. The EXPLORE function gets called for each neighbour of x (Alg. <ref> line <ref>) and generates new nodes corresponding to each critical departure (Alg. <ref> lines <ref>-<ref>). Therefore the resulting nodes cover all possible arrival times at every neighbouring vertex of x when departing at a time in the range [a_x, a_x + b_x + w_x]. During the refinement step, only those nodes are removed for which the arrival time range is already covered by another node (Alg. <ref> line <ref>). Therefore, after execution of the EXPLORE and REFINE functions, there exist nodes for all achievable arrival times at the neighboring vertices corresponding to the node (x,a_x,b_x). When a node (x,a_x,b_x) is extracted from Q, for every achievable arrival time a' < a_x at x (through any path from the start vertex), there exists at least one node with vertex x in the explored set S for which the arrival time range includes a'. We will use proof by induction. Base case: Consider the starting node (s,0,0) (first node extracted from Q). Since it has an arrival time of 0, and arrival times are non-negative there is no earlier achievable arrival time at vertex s, so the statement is true. 
Induction step: Assume the statement is true for the first k nodes extracted and added to S. We want to show that it is also true for the next node (x,a_x,b_x) extracted from Q. We will prove this by contradiction. Suppose there exists an achievable arrival time a'<a_x at x such that no node of vertex x in S has an arrival time range that includes a'. Let (x,a') is achieved via some path[Here, only the vertex-time pairs are used to denote a path. Wait times and mode of travel are omitted for simplicity.](s,0) (u,a_u) → (v,a_v) (x,a'), where (u,a_u) and (v,a_v) are two consecutive entries in the path. Let (v,a_v) be the first pair in the path for which a node enclosing arrival time a_v is not present in S. This can also be (x,a') itself. Since (v,a_v) is directly reachable from (u, a_u), when exploring the node corresponding to (u,a_u), a node corresponding to arrival time a_v at v must have been inserted (or already present) in Q (Lemma <ref>). Let this node be (v,a'_v, b'_v). We have a_v ∈ [a'_v, a'_v + b'_v]. Since b'_v ≥ 0, we get a'_v ≤ a_v. Also, a_v ≤ a' because (v,a_v) and (x,a') lie on a valid path. Since we assumed a' < a_x, we get a'_v < a_x. However, since (x, a_x, b_x) is extracted from Q first, we must have a_x ≤ a'_v. Therefore, the initial assumption must be incorrect, and the statement holds for any node extracted from Q. Consider a vertex x, and let (x,a_x,b_x) be the first node with vertex x that is extracted from Q. Then a is the earliest achievable arrival time at x. By Lemma <ref>, if there exists an arrival time a' < a_x at x which is achievable through any path from the start, a corresponding node must be in the explored set. Since (x,a_x,b_x) is the first node with vertex x that is extracted from Q, there is no node with vertex x in the explored set S. Therefore, a is the earliest achievable arrival time at x. § SIMULATIONS AND RESULTS In this section, we present the simulation setup and discuss the performance of different solution methods. §.§ Baseline Algorithms In this section, we present some solution approaches that we use to compare against the proposed Budget-A^* algorithm. §.§.§ TCSP-CWT The TCSP-CWT algorithm (Time-varying Constrained Shortest Path with Constrained Waiting Times), presented in <cit.>, solves the shortest path problem under the constraint of a bounded total travel time. To solve the given problem, we modify the original graph by creating two copies of each vertex, one for autonomous mode and another for assisted mode. New edges are added accordingly. The search is stopped at the first time step with a finite arrival time at the goal vertex. §.§.§ Time-expanded A^* The Time-expanded A^* algorithm is a modified version of the A^* algorithm that can be used to solve the given problem <cit.>. It creates a separate node for each vertex at each time step, and adds new edges based on the waiting limits and operator availability. §.§.§ Greedy (Fastest Mode) Method One efficient method for obtaining a solution is to combine a time-dependent greedy selection with a static graph search method. This approach is similar to an A^* search on a static graph, but takes into account the arrival time at each vertex while exploring it. To determine the edge duration to the neighboring vertices, we consider the faster of the two alternatives: traversing the edge immediately under autonomous mode or waiting for the operator to become available. 
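As an illustration of this greedy rule only (our reading of the informal description; in particular, restricting the wait to the limit w_x is our assumption), the per-edge arrival-time computation could look as follows, reusing the Problem container sketched earlier.

```python
def greedy_arrival(problem: Problem, x: Vertex, y: Vertex, t: int) -> int:
    """Greedy edge rule: the faster of (i) departing now autonomously and
    (ii) waiting (here: at most w_x) until the operator becomes available."""
    alpha, beta = problem.tau[(x, y)]
    best = t + alpha                                  # option (i): immediate autonomous traversal
    for wait in range(problem.wait_limit[x] + 1):     # option (ii): wait, then assisted traversal
        d = t + wait
        if problem.mu(d, d + beta):
            best = min(best, d + beta)
            break
    return best
```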
Once the goal vertex is extracted from the priority queue, we can stop the search and use the predecessor data to obtain the path. §.§ Problem instance generation For generating the problem instances, we use the map of the city of Waterloo, Ontario, Canada (a 10km × 10km area around the city centre). Using the open source tools QGIS and OpenStreetMap, we place a given number of points at different intersections and landmarks. These points serve as vertices in our graph. Next, we use Delaunay triangulation to connect these vertices and use OpenRouteService (ORS) to compute the shortest driving distance between these vertices. An example graph of the city is shown in Figure <ref>. To obtain the travel durations at each edge, we first sample robot speeds from a uniform random distribution. The travel durations under the two modes are then computed by dividing the edge length (computed using ORS) by the speed values and rounding off to the nearest integer. The travel speeds are sampled as follows: autonomous speed u^0_xy∼ U[0,40]; assisted speed u^1_xy∼ U[10 + u^0_xy,30 + u^0_xy]. The maximum waiting duration at each vertex x is sampled from a uniform random distribution as w_x ∼ U[0,15]. The operator availability function is generated by randomly sampling periods of availability and unavailability, with durations of each period sampled from the range of [10, 200]. The distance values used in our simulations are in meters, times are in minutes and speeds are in meters/minute. We test the algorithms using the Waterloo city map with varying vertex density, by selecting 64, 100 or 225 vertices to be placed in the map. We generate 20 problem instances for each density level (varying speeds, waiting limits and operator availability), and for each instance, we solve the problem for 100 randomly selected pairs of start and goal vertices. The algorithms are compared based on solution time and the number of explored nodes. We also examine some of the solutions provided by the greedy method. Note on implementation: All three graph search algorithms (Budget-A^*, Greedy and Time-expanded A^*) use the same heuristic, obtained by solving a problem instance under the assumption that operator is always available. This heuristic is admissible in a time-dependent graph <cit.> and can be computed efficiently. The priority queues used in all methods are implemented as binary heaps, allowing for efficient insertion, extraction and search operations. Additionally, all the methods require computation of the feasibility set (Alg. <ref> line <ref>). This is pre-computed for all departure times and is given as input to each algorithm. §.§ Results Figure <ref> compares the performance of the Budget-A^* algorithm with that of the Greedy algorithm in terms of durations of the generated paths. From the figure, we observe that the Greedy algorithm is able to generate optimal or close-to-optimal solutions for a large proportion of the tested problem instances. However, for many instances, the path generated by the greedy approach is much longer than that produced by the Budget-A^* algorithm, reaching up to twice the duration. To gain further insight into our results, we present Fig. <ref>, highlighting example instances where the greedy approach fails to generate an optimal solution. Through these examples, we demonstrate how our algorithm makes effective decisions regarding path selection, preemptive waiting, and not utilizing assistance to delay arrival at a later vertex. 
These decisions ultimately result in improved arrival time at the goal. Figure <ref> compares the computation time required by different solution methods for varying number of vertices and the duration of the optimal path between the start and goal vertices. The plots demonstrate that the proposed algorithm consistently outperforms the other optimal methods in terms of computation time, with the greedy method being the fastest but providing suboptimal solutions. The computation time for all methods increases with the number of vertices. The path duration has the greatest impact on the performance of the TCSP-CWT algorithm, followed by the Time-expanded A^*, the Budget-A^* algorithm, and finally the Greedy algorithm. Figure <ref> compares the number of nodes generated and explored by Time-expanded A^*, Budget A^*, and Greedy search algorithms. The number of nodes is a key metric to evaluate search efficiency as it reflects the number of insertions and extractions from the priority queue. The Time-expanded A^* generates nodes at a faster rate with increasing vertices, while the proposed algorithm generates an order of magnitude fewer nodes, indicating better efficiency and scalability. The Greedy search algorithm terminates after exploring the least number of nodes, indicating that it sacrifices optimality for speed. In contrast, both the Time-expanded A^* and Budget A^* algorithms guarantee optimality in their search results. § CONCLUSION In this paper, we introduced Budget-A^*, a new algorithm to tackle the problem of collaborative robot planning with bounded waiting constraints and intermittent human availability. Our approach computes the optimal execution path, which specifies which path should the robot take, how much to wait at each location and when to use human assistance. Our simulations on a city road network demonstrate that Budget-A^* outperforms existing optimal methods, in terms of both computation time and number of nodes explored. Furthermore, we note that the greedy method performs well for the majority of test cases, which could potentially be utilized to further improve efficiency of the proposed algorithm. For future research, the Budget-A^* algorithm can be extended to handle more complex constraints such as multiple types of human assistance, non-stationary operator availability, and dynamic task requirements. Our approach can be further optimized to handle even larger networks by incorporating better heuristics and pruning techniques. Finally, our algorithm can be adapted to other applications such as emergency response in unknown environments, where fast and online task planning is crucial. Our approach has significant implications for real-world applications like transportation systems, logistics, and scheduling, where time constraints and limited human supervision are crucial. We believe our work will inspire further research in these areas and lead to the development of more efficient algorithms for enabling human supervision under real-world restrictions. 10 url@rmstyleroyakkers2015literature L. Royakkers and R. van Est, “A literature review on new robotics: automation from love to war,”International journal of social robotics, vol. 7, pp. 549–570, 2015. mintrom2022robots M. Mintrom, S. Sumartojo, D. Kulić, L. Tian, P. Carreno-Medrano, and A. Allen, “Robots in public spaces: Implications for policy design,”Policy Design and Practice, vol. 5, no. 2, pp. 123–139, 2022. dahiya2023survey A. Dahiya, A. M. Aroyo, K. Dautenhahn, and S. L. 
Smith, “A survey of multi-agent human–robot interaction systems,”Robotics and Autonomous Systems, vol. 161, p. 104335, 2023. riley2021assessment D. G. Riley and E. W. Frew, “Assessment of coordinated heterogeneous exploration of complex environments,” in IEEE Conference on Control Technology and Applications (CCTA), 2021, pp. 138–143. wang2019time Y. Wang, Y. Yuan, Y. Ma, and G. Wang, “Time-dependent graphs: Definitions, applications, and algorithms,”Data Science and Engineering, vol. 4, pp. 352–366, 2019. swamy2020scaled G. Swamy, S. Reddy, S. Levine, and A. D. Dragan, “Scaled autonomy: Enabling human operators to control robot fleets,” in IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 5942–5948. dahiya2022scalable A. Dahiya, N. Akbarzadeh, A. Mahajan, and S. L. Smith, “Scalable operator allocation for multirobot assistance: A restless bandit approach,”IEEE Transactions on Control of Network Systems, vol. 9, no. 3, pp. 1397–1408, 2022. hickert2021cooperation C. Hickert, S. Li, and C. Wu, “Cooperation for scalable supervision of autonomy in mixed traffic,”arXiv preprint arXiv:2112.07569, 2021. cai2022scheduling Y. Cai, A. Dahiya, N. Wilde, and S. L. Smith, “Scheduling operator assistance for shared autonomy in multi-robot teams,” in IEEE Conference on Decision and Control (CDC), 2022, pp. 3997–4003. fusaro2021integrated F. Fusaro, E. Lamon, E. De Momi, and A. Ajoudani, “An integrated dynamic method for allocating roles and planning tasks for mixed human-robot teams,” in IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), 2021, pp. 534–539. hari2020approximation S. K. K. Hari, A. Nayak, and S. Rathinam, “An approximation algorithm for a task allocation, sequencing and scheduling problem involving a human-robot team,”Robotics and Automation Letters, vol. 5, no. 2, pp. 2146–2153, 2020. mau2007scheduling S. Mau and J. Dolan, “Scheduling for humans in multirobot supervisory control,” in 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems. 1em plus 0.5em minus 0.4em IEEE, 2007, pp. 1637–1643. gupta2019optimal P. Gupta and V. Srivastava, “Optimal fidelity selection for human-in-the-loop queues using semi-markov decision processes,” in 2019 American Control Conference (ACC), 2019, pp. 5266–5271. powel2012multiserver N. D. Powel and K. A. Morgansen, “Multiserver queueing for supervisory control of autonomous vehicles,” in 2012 American Control Conference (ACC). 1em plus 0.5em minus 0.4em IEEE, 2012, pp. 3179–3185. dean2004algorithms B. C. Dean, “Algorithms for minimum-cost paths in time-dependent networks with waiting policies,”Networks: An International Journal, vol. 44, no. 1, pp. 41–46, 2004. zhao2008algorithm L. Zhao, T. Ohshima, and H. Nagamochi, “A* algorithm for the time-dependent shortest path problem,” in WAAC08: The 11th Japan-Korea Joint Workshop on Algorithms and Computation, vol. 10, 2008. orda1990shortest A. Orda and R. Rom, “Shortest-path and minimum-delay algorithms in networks with time-dependent edge-length,”Journal of the ACM (JACM), vol. 37, no. 3, pp. 607–625, 1990. ding2008finding B. Ding, J. X. Yu, and L. Qin, “Finding time-dependent shortest paths over large graphs,” in International Conference on Extending Database Technology: Advances in Database Technology, 2008, pp. 205–216. bentert2020efficient M. Bentert, A.-S. Himmel, A. Nichterlein, and R. Niedermeier, “Efficient computation of optimal temporal walks under waiting-time constraints,”Applied Network Science, vol. 5, no. 1, pp. 1–26, 2020. 
he2021time E. He, N. Boland, G. Nemhauser, and M. Savelsbergh, “Time-dependent shortest path problems with penalties and limits on waiting,”INFORMS Journal on Computing, vol. 33, no. 3, pp. 997–1014, 2021. foschini2011complexity L. Foschini, J. Hershberger, and S. Suri, “On the complexity of time-dependent shortest paths,” in ACM-SIAM Symposium on Discrete Algorithms (SODA), 2011, pp. 327–341. cai1997time X. Cai, T. Kloks, and C.-K. Wong, “Time-varying shortest path problems with constraints,”Networks: An International Journal, vol. 29, no. 3, pp. 141–150, 1997. ford1958constructing L. R. Ford Jr and D. R. Fulkerson, “Constructing maximal dynamic flows from static flows,”Operations research, vol. 6, no. 3, pp. 419–433, 1958.
http://arxiv.org/abs/2307.07523v1
20230710110551
PapagAI:Automated Feedback for Reflective Essays
[ "Veronika Solopova", "Adrian Gruszczynski", "Eiad Rostom", "Fritz Cremer", "Sascha Witte", "Chengming Zhang", "Fernando Ramos López Lea Plößl", "Florian Hofmann", "Ralf Romeike", "Michaela Gläser-Zikuda", "Christoph Benzmüller", "Tim Landgraf" ]
cs.AI
[ "cs.AI", "cs.CL" ]
PapagAI: Automated Feedback for Reflective Essays. Veronika Solopova1, Eiad Rostom1, Fritz Cremer1, Adrian Gruszczynski1, Sascha Witte1, Chengming Zhang2, Fernando Ramos López1, Lea Plößl2, Florian Hofmann2, Ralf Romeike1, Michaela Gläser-Zikuda2, Christoph Benzmüller2,3, Tim Landgraf1. 1 Freie Universität Berlin, Germany; 2 Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany; 3 Otto-Friedrich-Universität Bamberg, Germany. August 12, 2023. Written reflective practice is a regular exercise pre-service teachers perform during their higher education. Usually, their lecturers are expected to provide individual feedback, which can be a challenging task to perform on a regular basis. In this paper, we present the first open-source automated feedback tool based on didactic theory and implemented as a hybrid AI system. We describe the components and discuss the advantages and disadvantages of our system compared to state-of-the-art generative large language models. The main objective of our work is to enable better learning outcomes for students and to complement the teaching activities of lecturers. § INTRODUCTION Dropout rates as high as 83% among pre-service teachers and the associated teacher shortages are challenging the German education system <cit.>. This may be due to learning environments not adequately supporting prospective teachers in their learning process <cit.>. Written reflective practice may alleviate the problem: by reflecting on what has been learned and what could be done differently in the future, individuals can identify areas for improvement. However, instructors may be overburdened by giving feedback to 200+ students on a weekly basis. With the rise of large language models (LLMs, <cit.>), automated feedback may provide welcome relief. Students could iteratively improve their reflections based on the assessment of a specialized model and, through that, their study performance. Instructors could supervise this process and invest the time saved in improving the curriculum. While current research is seeking solutions to align the responses of LLMs with a given set of rules, it is currently impossible to guarantee that the output of a purely learnt model is correct. Here, we propose “PapagAI”, a platform to write reflections and receive feedback from peers, instructors and a specialized chatbot. PapagAI uses a combination of ML and symbolic components, an approach known as hybrid AI <cit.>. Our architecture is based on various natural language understanding modules[All ML models are available in our OSF depository (https://osf.io/ytesn/), while linguistic processing code can be shared upon request.], which serve to create a text and user profile, according to which a rule-based reasoner chooses the appropriate instructions. § RELATED WORK PapagAI employs a number of models for detecting the topics contained in a reflection, assessing its quality and depth, and detecting the sentiment and emotions of the author.
While extensive previous work was published on each of these tasks, implementations in German are rare. To our knowledge, there is no previous work that combined all in one application. Automated detection of reflective sentences and components in a didactic context has been described previously <cit.>. In <cit.>, e.g., the authors analyse the depth of a reflection on the text level according to a three-level scheme (none, shallow, deep). Document-level prediction, however, can only provide coarse-grained feedback. Liu et al. <cit.>, in contrast, also use three levels for predicting reflective depth for each sentence. In emotion detection, all previous works focus on a small set of 4 to 6 basic emotions. In Jena <cit.>, e.g., the author describes detecting students' emotions in a collaborative learning environment. Batbaatar et al. <cit.> describes an emotion model achieving an F1 score of 0.95 for the six basic emotions scheme proposed by Ekman <cit.>. Chiorrini et al. <cit.> use a pre-trained BERT to detect four basic emotions and their intensity from tweets, achieving an F1 score of 0.91. We did not find published work on the German language, except for Cevher et al. <cit.>, who focused on newspaper headlines. With regard to sentiment polarity, several annotated corpora were developed for German <cit.>, mainly containing tweets. Guhr et al. <cit.> use these corpora to fine-tune a BERT model. Shashkov et el. <cit.> employ sentiment analysis and topic modelling to relate student sentiment to particular topics in English. Identifying topics in reflective student writing is studied by Chen et al. <cit.> using the MALLET toolkit <cit.> and by De Lin et al. <cit.> with Word2Vec + K-Means clustering. The techniques in these studies are less robust than the current state-of-art, such as ParlBERT-Topic-German <cit.> and Bertopic <cit.>. Overall, published work on automated feedback to student reflections is scarce, the closest and most accomplished work being AcaWriter <cit.> and works by Liu and Shum <cit.>. They use linguistic techniques to identify sentences that communicate a specific rhetorical function. They also implement a 5-level reflection depth scheme and extract parts of text describing the context, challenge and change. The feedback guides the students to the next level of reflective depth with a limited number of questions. In their user study, 85.7% of students perceived the tool positively. However, the impact on the reflection quality over time was not measured and remains unclear. § METHODS, COMPONENTS AND PERFORMANCES Data collection. Our data comes from the German Reflective Corpus <cit.>. The dataset contains reflective essays collected via google-forms from computer science and ethics of AI students in German, as well as e-portfolio diaries describing school placements of teacher trainees from Dundee University. For such tasks as reflective level identification and topic modelling, we enlarged it by computer science education students' essays and pedagogy students' reflections[This still non-published data can be obtained upon request.]. It consists of reflections written by computer science, computer science education, didactics and ethics of AI students in German and English. Data is highly varied, as didactics students write longer and deeper reflections than e.g. their computer science peers. Emotions detection. 
Setting out from the Plutchik wheel of basic emotions <cit.>, during the annotation process we realised that many of the basic emotions are never used, while other states are relevant to our data and the educational context (e.g. confidence, motivation). We framed it as a multi-label classification problem at the sentence level. We annotated 6543 sentences with 4 annotators. The final number of labels is 17 emotions, with the 18th label being 'no-emotion'. We calculated the loss using binary cross entropy, where each label is treated as a binary classification problem, the loss is calculated for each label independently, which we sum for the total loss. We achieved the best results with a pre-trained RoBERTa <cit.> , with a micro F1 of 0.70 and a hamming score of 0.67 across all emotion labels. The model achieved the highest scores for “surprise”, “approval” and “interest”. With a lenient hamming score, accounting for the model choosing similar emotions (e.g. disappointment instead of disapproval) our model achieves up to 0.73. Gibbs cycle. <cit.> illustrates cognitive stages needed for optimal reflective results. It includes 6 phases: description, feelings, evaluation, analysis, conclusion and future plans. We annotated the highest phase present in a sentence and all the phases present. We treated this as a multi-class classification problem and used a pre-trained ELECTRA model. While evaluating, we compared one-hot prediction to the highest phase present and 3 top probability classes with all the phases present. While one-hot matching only managed to score 65% F1 macro, the top 3 predictions achieve up to 98% F1 macro and micro. Reflective level detection. Under the supervision of Didactics specialists two annotators labelled 600 texts according to Fleck & Fitzpatrick's scheme <cit.>, achieving moderate inter-annotators agreement of 0.68. The coding scheme includes 5 levels: description, reflective description, dialogical reflection, transformative reflection and critical reflection; With 70% of the data used for the training and 30% for evaluation, we used pre-trained BERT large and complete document embeddings for the English and German, resulting in QWK score of 0.71 in cross-validation. Topic modelling. We used BERTopic <cit.> on the sentence level. First, we tokenized and normalize the input sequence to lowercase and filter out numbers, punctuation, and stop-words using nltk library <cit.>. Then, we extract embeddings with BERT, reduce dimensionalities with UMAP, cluster reduced embeddings with HDBSCAN, create topic representation with tfidf and fine-tune topic representations with the BERT model. Because we have a lot of data of different origins, we created two clusterings, one more specific to the pedagogy topic and one including various educational topics. You can see our clusters in App. Linguistic scoring. Using spacy[https://spacy.io] we tokenized, and lemmatize the sentences, extracted dependencies parcing and part of speech. Additionally, we used RFTagger<cit.> for parts of speech and types of verbs. We extract sentence length, adverb for verb ratio, adjective for noun ratio, number of simple and complex sentences, types of subordinated clauses and number of discourse connectors[We use Connective-Lex list for German: https://doi.org/10.4000/discours.10098.] used. This information enables us to determine the reflection length, expressivity and variability of the language, as well as surface coherence and structure. § SYSTEM ARCHITECTURE In PapagAI (see Fig. 
<ref>) the input text of the reflection is received from the AWS server through a WebSocket listener script. To minimize the response time, the models are loaded in the listener script once, and each user request then spawns a thread with the models already loaded. If the input text is shorter than three sentences and contains forbidden sequences, the processing does not start and the user receives a request to revise their input. Otherwise, the text is segmented into sentences and tokens. The language is identified using langid <cit.>, and if the text is not in German, it is translated using a Google Translator API implementation.[https://pypi.org/project/deep-translator/] The reflective level model receives the whole text, while the other models are fed with the segmented sentences. Topic modelling and Gibbs cycle results are mapped onto each other to identify whether the topics were well reflected upon. If more than three sentences are allocated to a topic and these sentences were identified by the Gibbs cycle model as analysis, we consider the topic well thought through. The extracted features are then passed to the feedback module. Here, lacking and under-represented elements are identified among the linguistic features, together with the three least present Gibbs cycle stages. If sentiment and emotions are all positive, we conclude that no potential challenges and problems have been thought through. If the sentiment and emotions are all negative, we want to induce optimism. These features, together with the reflective level, are mapped to the database of potential prompts and questions, where one of the suitable feedback options is chosen randomly for the sake of variability. Using manually corrected GPT-3 outputs, we created variations for each prompt so that the feedback does not repeat often even if the same prompts are required. The extracted textual prompts are assembled in a rule-based way into a template, prepared for German, Spanish and English; otherwise, the overall feedback is generated in German and then translated into the input language. The textual feedback and a vector of extracted features for the visual representation are sent back to the AWS server. The whole processing takes from 15 to 30 seconds, depending on the length of the text. Sample feedback can be seen in Figure <ref>. § COMPARISON WITH GPT-3 We compared our emotion detection model (fine-tuned RoBERTa) and Gibbs cycle model (fine-tuned ELECTRA) with the prompt-engineered state-of-the-art generative model Davinci <cit.> on the same tasks. For the evaluation and comparison, we used a small subset of 262 samples which were not part of the training data. We first tried the zero-shot approach, where we described our labels to GPT-3 and gave it our sentence to predict. Then, we tried a one-shot approach, providing GPT-3 with one example sentence for each label. Finally, in the few-shot approach, we provided GPT-3 with three examples per label, which is the maximum number of examples possible due to the input sequence length restriction. Although the task requested GPT-3 to pick multiple labels out of the possible options, the model predicted multiple labels in only 5% of the cases for emotions. For this reason, we used a custom-defined "one correct label" score: the prediction is considered correct if it contains at least one correct label from the sentence's true labels. The zero-shot approach achieved only 0.28 accuracy in predicting one correct label for emotions. The model predicted the labels "information", "uncertainty", "interest", and "motivated" for the majority of the sentences.
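To make this lenient metric concrete, the following is a minimal sketch (our own illustration, not the authors' code) of the "one correct label" score described above; the function and variable names are hypothetical.

from typing import List, Set

def one_correct_label_score(predictions: List[Set[str]], gold_labels: List[Set[str]]) -> float:
    # Fraction of sentences whose predicted label set shares at least one label
    # with the annotated (true) label set of that sentence.
    assert len(predictions) == len(gold_labels)
    hits = sum(1 for pred, gold in zip(predictions, gold_labels) if pred & gold)
    return hits / len(gold_labels)

# GPT-3 predicted multiple labels in only ~5% of cases, so each prediction set
# typically contains a single emotion.
preds = [{"interest"}, {"uncertainty"}, {"motivated"}]
gold = [{"interest", "surprise"}, {"confidence"}, {"motivated"}]
print(one_correct_label_score(preds, gold))  # 2/3, i.e. about 0.67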
With the Gibbs cycle task, it achieved 80% correct predictions. Providing one example per label improved the performance noticeably, by 18% (0.46), for emotions, and the model was able to detect emotions like "confidence", "challenged", and "approval" more accurately. It did not influence the Gibbs cycle performance. Increasing the number of examples to three resulted in a slight improvement of 3% (0.49) for emotions, and 7% (0.87) for the Gibbs cycle. However, the best-scoring approaches did not match the performance of our fine-tuned models on these specific tasks, which reach 0.81 on the same custom metric for emotion detection and 0.98 for the Gibbs cycle. § DISCUSSION AND CONCLUSION The current PapagAI system has several advantages in comparison to generative LLMs. It ensures transparency of the evaluation and control over the output, which is based exclusively on didactic theory. Although LLMs show huge promise, they are still prone to hallucination <cit.>, and, as we have shown in <ref>, they may underperform on difficult cognitive tasks in comparison to smaller language models fine-tuned for the task. The fine-tuning of LLMs on didactic books and instructions, which we plan for our future work, still does not guarantee 100% theoretical soundness of the output, which is problematic, e.g., in the case of pre-service students with statistically low AI acceptance. At the same time, the newest models, such as GPT-4, are only available through APIs, which raises concerns about data privacy, especially as the data in focus is an intimate reflective diary. Moreover, current open-source models, such as GPT-J and GPT-2, do not achieve comparable results, especially for languages other than English. Our architecture has, however, obvious drawbacks. First, our models do not reach 100% accuracy, and this can naturally lead to suboptimal feedback. Second, the processing time for many models, especially for longer texts, can be significantly higher than for a single generative LLM. For now, as we provide one feedback message for one rather long reflection, this is not a big issue; however, if we implement a dialogue format, the response time would not feel natural. Finally, the variability of the output using our approach is much more limited in comparison to generative models. We try to address this by creating many similar versions of the instructions, rephrased by GPT-3 and corrected manually. On average, 7 out of 10 prompts needed some correction. Most of the errors were related to GPT-3 trying to rephrase the given sentence using synonyms that were not didactically appropriate in the given context. Future work will, among other things, focus on user studies to understand how we can optimize the feedback so that the users find it credible and useful, while their reflective skills advance. We also plan a more detailed evaluation based on more user data. We hope that our work will contribute to the optimization of pre-service teachers' reflective practice and self-guided learning experience. § APPENDIXES
http://arxiv.org/abs/2307.03878v2
20230708014803
New Constraints on ALP Electron and Photon Couplings from ArgoNeuT and the MiniBooNE Beam Dump
[ "Francesco Capozzi", "Bhaskar Dutta", "Gajendra Gurung", "Wooyoung Jang", "Ian M. Shoemaker", "Adrian Thompson", "Jaehoon Yu" ]
hep-ph
[ "hep-ph", "hep-ex" ]
Dipartimento di Scienze Fisiche e Chimiche, Università degli Studi dell’Aquila, 67100 L’Aquila, Italy Istituto Nazionale di Fisica Nucleare (INFN), Laboratori Nazionali del Gran Sasso, 67100 Assergi (AQ), Italy Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A&M University, College Station, TX 77845, USA Department of Physics, University of Texas, Arlington, TX 76019, USA Department of Physics, University of Texas, Arlington, TX 76019, USA Center for Neutrino Physics, Department of Physics, Virginia Tech, Blacksburg, VA 24061, USA Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A&M University, College Station, TX 77845, USA Department of Physics, University of Texas, Arlington, TX 76019, USA Beam dumps and fixed-target experiments have been very sensitive probes of new weakly coupled particles and other physics beyond the Standard Model (BSM) by considering the production of new states from the primary interaction in the beam dump. In a proton beam dump, there are many secondary interactions taking place in electromagnetic showers which may be additional production channels for pseudoscalar bosons or axion-like particles (ALPs). The target-less configuration of the MiniBooNE experiment, which collected data from 1.86 × 10^20 protons impinging directly on the steel beam dump, is an excellent test of sensitivity to these production channels of ALPs in the MeV mass region. Using the null observation of the MiniBooNE dump mode data, we set new constraints on ALPs coupling to electrons and photons, produced through a multitude of channels and detected via both scattering and decays in the MiniBooNE detector volume. We find that the null result rules out parameter space that was previously unconstrained by laboratory probes in the 10-100 MeV mass regime for both electron and photon couplings. Lastly, we make the case for performing a dedicated analysis with 1.25× 10^20 POT of data collected by the ArgoNeuT experiment, which we show to have complementary sensitivity and set the stage for future searches. MI-HET-808 New Constraints on ALP Electron and Photon Couplings from ArgoNeuT and the MiniBooNE Beam Dump Francesco Capozzi, Bhaskar Dutta, Gajendra Gurung, Wooyoung Jang, Ian M. Shoemaker, Adrian Thompson, Jaehoon Yu ============================================================================================== § INTRODUCTION Particle beam dumps have proven to be ultra-sensitive probes of new physics sectors beyond the Standard Model (BSM), where the myriad electromagnetic and hadronic cascades produce showers of electrons, positrons, gamma rays, and mesons, each a potential channel for BSM particle production. Studying the beam target environment and the particle showers within is thus a crucial first step to understanding what kind of physics is possible, and at what energy scales. Many searches have already been performed at electron beam dumps (E137, NA64, E141, Orsay, E774, etc. <cit.>) and proton beam dumps at the GeV energy scale (e.g. CHARM, NuCal, NA62, SeaQuest/SpinQuest <cit.>) and sub-GeV sources (e.g. CCM <cit.>, IsoDAR <cit.>, and COHERENT <cit.>), and others <cit.>. The existence of pseudoscalar bosons with small couplings to the SM is predicted in models of broken symmetries in connection with explaining many puzzles in nature.
Axions and axion-like particles (ALPs) are central features in the landscape of solutions, in particular, to the strong CP problem <cit.> and to the dark matter problem <cit.>, and otherwise appear ubiquitously in string theory <cit.>, and the ultraviolet spectra of many other puzzle-solving models with spontaneously broken symmetries. In many of these scenarios, it is possible that the ALP has couplings to SM leptons and the electromagnetic field, making the particle showers inside the beam target good laboratory probes of ALPs, reaching up to GeV mass scales. ALPs at the MeV to GeV mass scales are of particular interest to beam dump and fixed target experiments and have been studied in the context of heavy axions <cit.>, whose parameter space extends beyond that of traditional QCD axion models. In 2018 MiniBooNE collaboration performed an analysis of their targetless-mode run <cit.>, in which they collected data associated with 1.86 × 10^20 protons on target (POT) bypassing the main beryllium target and impinging on the steel beam dump. Expected neutrino rates for this mode were very low, and no excess of events was observed, in contrast to the results from the target-mode runs <cit.>. In this work, we show that the null result from this data set is sensitive enough to ALPs produced in electromagnetic showers in the dump to set new limits on photon and electron couplings. Running in a target-less mode has the effect of suppressing the fluxes of neutrinos coming from charged meson decays. Searches for BSM particles that have production channels orthogonal to the charged pion decay gain a big advantage here; in the case of a thin target, the charged mesons decay in flight after getting produced, allowing them to be focused by the magnetic horn system. In the thick beam dump case, however, the charged pions are stopped in the material and decay isotropically, suppressing the subsequent neutrino background that would lie in the signal region for the BSM search. This realization is especially important for future beam dump experiments at higher energies, where the higher intensity of electromagnetic cascades provide both the coupling and mass reach necessary to significantly extend the limits tested so far by laboratory searches in the MeV to GeV mass range. We will show that data collected by the ArgoNeuT detector <cit.> already has this capability, and depending on the specific sensitivity of a dedicated analysis, null observations in this data could already rule out parameter space unconstrained by laboratory probes to-date. In  <ref> we outline the production and detection channels we consider for electromagnetically-coupled ALPs. In  <ref> we describe the statistical analysis performed for the MiniBooNE dump-mode data and the ArgoNeuT data given an ALP signal hypothesis, with the resulting limits placed on the parameter space of photon and electron couplings in  <ref>. Finally we conclude in  <ref>. § BSM PRODUCTION AND DETECTION IN A BEAM DUMP We consider primarily ALPs produced in electromagnetic cascades inside the beam dump or beam target environment, e.g., those that get produced from couplings to photons and to electrons; ℒ_ALP⊃ i g_ae a ψ̅_e γ^5 ψ_e - 1/4 g_aγ a F_μνF^μν This Lagrangian, which for simplicity we will assume only one tree-level coupling active or dominant at a time, opens up a slew of production and detection channels available to beam target and beam dump experiments. These have recently been investigated in refs. <cit.>, and we summarize them in Table <ref>. 
For ALPs coupled to electrons, the dominant final state will be e^+ e^- pairs appearing in the detector as single Cherenkov rings, either from the pair being highly collinear with a separating angle less than the typical angular resolution of the detector or if one of the electrons/positrons are too soft. This final state appears mainly through decays for m_a > 2 m_e and otherwise through the Bethe-Heitler lepton pair production process (a Z → e^+ e^- Z) for sub-MeV ALPs, considered before to set limits on light (pseudo)scalars appearing in a proton beam target <cit.>. The cross-section for this process was computed in refs. <cit.> using the formalism and atomic form factors presented in ref. <cit.>, and it is larger than inverse-Compton scattering (a e^- →γ e^-) by up to an order of magnitude for ALP energies in the 100 MeV - 1 GeV range, which is the energy region of interest for this study. The resonant cross section in the electron rest frame is σ = 2 π m_e g_ae^2 sm_a^2 √(s(s-4m_e^2))δ(E_+ - (m_a^2/2m_e - m_e)) ≃2π m_e g_ae^2m_a^2δ(E_+ - (m_a^2/2m_e - m_e)). To simulate the production fluxes, we first generate the SM particle fluxes inside the MiniBooNE dump with GEANT4 using the physics list, then pass a high-statistics sample of each particle flux (e^±, γ, π^±) into the event generator.[https://github.com/athompson-git/alplibhttps://github.com/athompson-git/alplib] The positron and electron fluxes are shown in Fig. <ref>, while the photon flux is shown in Fig. <ref>. We show a large phase space of the e^± and γ fluxes to illustrate the many low-energy features that come about from processes like nuclear de-excitation and beta decay. However, in principle, only the high energy tail (>75 MeV) in the forward-going region (θ≲ 10^-2 rad) is responsible for the bulk of BSM particle production that is captured within the signal region and pointing within the solid angle of the MiniBooNE detector. This is illustrated in Fig. <ref> where we show the energy spectra before and after an angular cut of 10 mrad. Further details of the event selection and signal window are discussed in the following section. For ALPs produced from electrons or positrons in resonant production (e^+ e^- → a), associated production (e^+ e^- → a γ), or bremsstrahlung (e^± Z → e^± Z a), the energy loss of the electrons and positrons in the material during particle transport must also be folded into the event rate calculation. This modifies the number flux leaving the beam dump as dN_adE_a = N_A X_0/A (ħ c)^2 ∫d^2Φ_e^+/dE_e dΩ_e I(t, E_+, E^') ×Θ_detd^2σ(E^')dE^' dΩ^' dΩ_e dΩ^' dE_+ dt dE^' where N_A is Avogadro's number, X_0 is the radiation length of the electrons/positrons in the dump material, and A is the atomic weight. I(t, E_i, E_f) = θ(E_i - E_f)/E_i Γ (4 t/3) (ln E_i/E_f)^4t/3 - 1 is the energy loss smearing function for the electron/positron radiation length t integrated up to target radiation thickness T <cit.>. We integrate over the solid angle of the positron with respect to the beamline, Ω_e, and outgoing ALP solid angle with respect to the positron direction, Ω^', taking care to integrate only those ALPs pointed in the direction of the detector solid angle through the Heaviside function Θ_det <cit.>. § DATA ANALYSIS §.§ MiniBooNE Dump Mode The final states of concern in our search for ALPs in the MiniBooNE detector are photon-like events and electron-like events, listed in Table <ref>. We have adopted the same selection cuts made in the ν-e analysis of the MiniBooNE dump mode data for these states. 
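Before turning to the detector response, a small numerical aside (ours, not part of the paper's analysis chain) may help build intuition for the resonant channel introduced above: the delta function fixes the lab-frame ALP energy at E_a = E_+ + m_e = m_a^2/(2 m_e), which determines whether the resonance falls inside the 75-850 MeV visible-energy window used in the analysis below.

M_E = 0.000511  # electron mass in GeV

def resonant_alp_energy(m_a_gev: float) -> float:
    # Lab-frame ALP energy from resonant e+ e- -> a production on atomic electrons,
    # E_a = m_a^2 / (2 m_e); the positron resonance energy is E_+ = E_a - m_e.
    return m_a_gev**2 / (2.0 * M_E)

for m_a in [0.005, 0.010, 0.020, 0.030]:  # ALP masses in GeV
    e_a = resonant_alp_energy(m_a)
    in_window = 0.075 <= e_a <= 0.850  # visible-energy region of interest in GeV
    print(f"m_a = {1e3 * m_a:5.1f} MeV -> E_a = {1e3 * e_a:6.1f} MeV, inside ROI: {in_window}")

Under this simple kinematic relation, m_a of roughly 10 MeV is the lightest mass whose resonance peak (about 98 MeV) lands inside the window, while by about 30 MeV the peak moves just past its upper edge.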
Here we study the detector response with true simulated information to analyze the efficiency of the electron-like event selection from reconstructed events inside the detector. For the analysis of the Monte Carlo generated data, after the preliminary cuts have been applied, the first round of the reconstructed events is fit under the one-track electron and muon hypothesis. Each fit returns the likelihood of the corresponding hypothesis: ℒ_e and ℒ_μ. Those events satisfying the log(ℒ_e/ℒ_μ) > -0.05 continue the next round of reconstruction. In the second round, reconstructed events are fit under the general two-photon hypothesis. Similarly, the events should satisfy log(ℒ_π^0/ℒ_e) < 0. The efficiencies of these two cuts using simulated data as functions of electron visible energy and electron scattering angle are shown in Fig. <ref>. The selection efficiencies as a function of the visible energy, E^vis_e, are fitted as an arctangent function (p_0arctan(p_1 x) + p_2). The selection efficiencies as a function of the cosine of the angle with respect to the beam axis, cosθ_e, are fitted as a straight line (p_0+p_1x) except for the forward region of log(ℒ_e/ℒ_μ) which has a second-order polynomial fit (p_0+p_1x+p_2x^2). Uncertainties from the goodness-of-fit on the efficiency curve as a function of E_e^vis and cosθ_e are constrained to be less than 20%, so their impact on the exclusions over the model parameter space shown in the following section will not be qualitatively different. In addition to these log-likelihood efficiencies, we also take into account the cut on the reconstructed vertex radius of 500 cm, which effectively reduces the MiniBooNE volume to a sphere of 10 m in diameter. Other cuts, such as the number of tank and veto hits, and the Scintillation / Cherenkov ratios we assume to have perfect signal efficiency for the detection channels in Table <ref>. However, we do check that the γγ, e^+ e^-, and γ e^- final states from axion interactions and decays are collinear enough to be identified as a single electron-like Cherenkov ring in the detector. This also ensures that the cut on the di-gamma invariant mass m_γγ≤ 80 MeV is passed by selection for our ALP signals. Lastly, we bin the ALP signal Monte Carlo events into visible energy and cosine bins between 75 ≤ E_γ≤ 850 MeV and cosθ≥ 0.9 (taking E_γ = E_e^vis for the electron-like visible energy measurement). Since inverse Primakoff scattering is characterized by a forward outgoing photon, while inverse Compton scattering is characterized by a forward outgoing electron and a soft off-forward photon (typically below the lower energy cut), these scattering channels are well within the selection region for most choices of the couplings and the ALP mass. Example spectra for photon and electron coupling channels are shown in Fig. <ref>, where we have convolved the predicted event rates with the efficiency functions described above. For the case of ALPs undergoing inverse Primakoff scattering in the detector, a Z →γ Z, we integrate over the visible energy and outgoing angle of the final state photon; d^2R/dE_γ dΩ_γ = N_T ∫dN_a/dE_ad^2σ(E_a)/dE_γ dΩ_γϵ(E_γ) ϵ(Ω_γ) dE_a where ϵ(E_γ) and ϵ(Ω_γ) = ϵ(cosθ_γ) are equivalent to the visible energy and cosine efficiencies, respectively, of the electron-like signals shown in Fig. <ref>. Here, recall the differential event rate dN_a / dE_a passing into the detector from Eq. <ref>. Integrating Eq. 
<ref> over energy bin edges [75, 100, 150, 200, 250, 300, 500, 850] (in MeV) and cosine bin edges [0.9, 0.95, 0.99, 1.0] yields the ALP signal s_i in each bin i as a function of the mass and couplings. In the case of decays, instead of the differential cross section in Eq. <ref> we use the probability of decays occurring inside the detector P_decay= e^-ℓ/(τ v_a)[ 1 - e^-Δℓ /(τ v_a) ] where τ v_a is the ALP decay length in the lab frame, ℓ is the baseline distance between the ALP production in the dump, and Δℓ is the fiducial path length in the detector during which the decay must take place. For the other detection channel final states (2γ, 1γ1e^-, or e^+e^-), both final state particles leave visible energy in the detector, so we need to ensure that they are collinear enough to be reconstructed as a single Cherenkov ring in the detector. We check the angular distribution of the final state and cut events if two final state particles are separated by more than 5 degrees. We use a binned log-Poisson likelihood to obtain the confidence limits; ln L(θ⃗) = ∑_i=1^7 d_i ln[s_i(θ⃗) + b_i] - [s_i(θ⃗) + b_i] - ln[Γ(d_i + 1)] for data d_i, backgrounds b_i, and signal s_i(θ⃗), where θ⃗ = (m_a, g_aγ) in the case of dominant ALP-photon coupling and θ⃗ = (m_a, g_ae). The CLs are then given by finding regions of constant delta-log-likelihood, -2Δln L ≡ 2(ln L(θ) - ln L(θ)_min), in the relevant model parameter space θ⃗. §.§ ArgoNeuT ArgoNeuT <cit.> collected data from 1.25 × 10^20 POT impinging on the NuMI target, with its LArTPC detector situated 1.04 km downstream of the target while the beamline was in anti-neutrino mode <cit.>. With a fiducial volume of 0.40×0.47×0.90 cm^3, the angular acceptance of the detector coverage corresponds to roughly 0.325 mrad in solid angle. We perform a similar simulation with GEANT4 using the physics list to model the particle cascades inside the NuMI beam target environment (120 GeV protons on graphite). The ALP flux is calculated in the same way explained in the case of the MiniBooNE dump. From the GEANT4 flux distributions of e^± and γ in the solid angle of ArgoNeuT, shown in Fig. <ref>, we estimate the ALP flux produced from 1.25× 10^20 POT during data collection. A dedicated search for heavy ALPs decaying to di-muon pairs was performed by the ArgoNeuT collaboration <cit.>, exhibiting an event topology with very low background expectations. However, here we are interested in different types of event topologies: e^+ e^-, e^- γ, 2γ and 1γ (see Table <ref>), for which a dedicated analysis is missing. Therefore, we will not perform a likelihood analysis. We will just provide the contours in the parameter space for which the following number of signal ALP events would be observed in ArgoNeuT: 3, 20, and 100. These numbers are equal to the Poisson error of ∼ 10, 400, and 10^4 background events, respectively. § RESULTS The constraints on the ALP-photon coupling g_aγ as a function of the ALP mass m_a derived from MiniBooNE beam dump mode data is shown in Fig. <ref>. The 1σ and 2σ CLs are shown individually using the delta-log-likelihood method, and we find that the MiniBooNE data sets new laboratory limits on the ALP coupling for masses below 100 keV or so, where previously astrophysics (HB star cooling and SN1987a <cit.>, see also refs. 
<cit.>) had placed the only constraints ahead of beam dump constraints <cit.>[The measurement of the explosion energy of SN1987A can have tension to the cosmological triangle region unless the star cooling process is significantly different from the standard picture <cit.>.] and recently, constraints set by the CCM120 engineering run <cit.>. Limits set by the ArgoNeuT null result from 1.25× 10^20 POT of collected data are shown in blue, benchmarking the signal event rate at 3, 20, and 100 events in the absence of a dedicated analysis with backgrounds and proper event selection. Comparing the shape of the exclusion contours between MiniBooNE and ArgoNeuT, one can see the impact of the longer baseline between beam target and detector at ArgoNeuT (∼ 1 km) versus MiniBooNE (489 m) shifting the sensitivity contour to larger masses reflecting longer ALP lifetimes for a →γγ decay. In this space, we also show the parameter space associated with QCD Axion model benchmarks spanned between the dashed black lines. Here the range of couplings and masses are shown for Kim-Shifman-Vainshtein-Zakharov (KSVZ) benchmark models <cit.>, where the range is defined by taking the anomaly number ratios of E/N = 44/3 to E/N = 2 in the model. The correlations between the QCD axion mass and its effective couplings are taken from ref. <cit.> (see also Appendix <ref>). While the constraints shown here are purely on the photon-ALP couplings, independent constraints on the ALP-gluon couplings in these model variants are stringent and would indirectly rule out much of the parameter space <cit.>. These bands are of course only representative of these traditional QCD models shown for a sense of scale. QCD axions that are invoked to solve the strong CP problem which have parametrically heavier or lighter masses in other non-traditional models are also possible <cit.>. We set limits in the same way on the electron-ALP coupling g_ae as a function of the ALP mass in Fig. <ref>. The parameter space associated with Dine-Fischler-Srednicki-Zhitnitsky (DFSZ) benchmark models <cit.>, for which couplings to electrons would be dominant relative to the photon couplings, the span between the dashed black lines. Again, we show this span of model parameter space for reference although the constraints shown here from pure g_ae-driven channels are conservative and indirect constraints on the DFSZ gluon couplings would be more stringent. In the electron coupling, we find that MiniBooNE dump mode tests parameter space already ruled out by existing laboratory searches (e.g. NA64, E137, and other beam dumps). Although, in the mass range ∼ 10 MeV the resonant channel e^+ e^- → a produces a highly peaked signal which becomes visible inside the energy region of interest, 75 < E_vis < 850 MeV (see Fig. <ref>). This is because the resonant energy tracks the square of the ALP mass, as E_a = m_a^2 / (2m_e), producing the first visible peak within this energy range for m_a ≃ 10 MeV. The MiniBooNE dump mode becomes highly sensitive to ALP signals here for those masses but is consistent with the existing E137 constraints in this region. The subtle undulating features in the CL contours from m_a = 10 - 30 MeV then reflect the signal rising and falling to accommodate the two data points in the 3rd and 6th energy bins in Fig. <ref>. ArgoNeuT sensitivity to this coupling is fairly powerful in the m_a > 2m_e mass range and would exclude new parameter space ahead of the limits set by the CCM120 engineering run between m_a = 1 MeV and m_a = 5 MeV. 
This is owed in part to the energy scale and long distance from the detector to the target being ideal to probe long ALP lifetimes, and also the relatively larger e^± fluxes produced in the NuMI target (Fig. <ref>). This exclusion would be possible even for a benchmark signal rate of 100 events, corresponding roughly to a Poisson background of 10^4 events without taking into account signal efficiency. This sensitivity is lost in the scattering limit for m_a < 2 m_e where NA64 missing energy and CCM120, where being at much closer proximity to the production site, ℓ∼ 20 m plays a bigger role, set the leading constraints. § OUTLOOK The analysis of the MiniBooNE dump mode data shows significant sensitivity to dark sector states produced by the secondary electromagnetic cascades in the BNB dump environment. By utilizing the off-target configuration and examining the interactions of 1.86 × 10^20 protons with the steel beam dump, we have expanded the existing constraints on ALPs in the 10-100 MeV mass regime that couple to photons. Simultaneously, despite a small exposure and fiducial detector mass, the null observations of ArgoNeuT could potentially rule out parameter space for ALPs in the same mass range coupling to electrons, due to the higher beam energy. Stopped-pion experiments at ∼GeV scale proton beam dumps also have the capability to probe new physics in the secondary electromagnetic showers, expanding in complementary regions of model parameter space to the higher energy, longer baseline beam dump experiments situated at the NuMI, BNB, or LBNF beams. Future beam dump searches may be possible to fully probe QCD axion parameter space for MeV masses, such as a proposed dump mode or target-less running mode for DUNE <cit.>. A dedicated target-less mode was shown to test electron-ALP couplings down to g_ae∼ 10^-6 for m_a < 2 m_e and down to g_ae∼10^-9 from ALP decays to e^+ e^- pairs with a limited 3 month to 1-year exposure. § ACKNOWLEDGMENTS We are grateful to Ornella Palamara for the helpful discussions regarding the potential for dedicated ALP studies at ArgoNeuT. The work of IMS is supported by DOE under the award number DE-SC0020250. The work of BD and AT is supported by the DOE Grant No. DE-SC0010813. Portions of this research were conducted with the advanced computing resources provided by Texas A&M High-Performance Research Computing. The work of GG, WJ, and JY is supported by the U.S. Department of Energy under Grant No. DE-SC0011686. We thank the Center for Theoretical Underground Physics and Related Areas (CETUP*) and SURF for facilitating portions of this research. § QCD AXION MODELS The correlations between the QCD axion mass and its effective couplings are given below, taken from ref. <cit.>. We simply reiterate those correlations here for the convenience of the reader. The relation between the Peccei-Quinn breaking scale f_a and the axion mass is f_a = (5.691× 10^6eV/m_a) GeV To find the correlations between the axion mass and its effective couplings to photons in the Kim-Shifman-Vainshtein-Zakharov (KSVZ) benchmark model <cit.> is then given by Eq. <ref>; g_aγ = m_a/GeV(0.203 E/N - 0.39) We then consider a range of model parameter space by considering anomaly number ratios of E/N = 44/3 to E/N = 2. This defines a band in (m_a, g_aγ) parameter space in which the QCD axion's couplings and mass may reside. 
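As a small numerical illustration of the two relations just quoted (our own sketch, not from the paper; g_aγ carries units of GeV^-1 as implied by the formula), the KSVZ band edges for a given ALP mass can be tabulated directly:

def f_a_gev(m_a_ev: float) -> float:
    # Peccei-Quinn scale from the axion mass, f_a = (5.691e6 eV / m_a) GeV.
    return 5.691e6 / m_a_ev

def g_a_gamma(m_a_gev: float, e_over_n: float) -> float:
    # KSVZ-type photon coupling, g_agamma = (m_a / GeV) * (0.203 E/N - 0.39), in GeV^-1.
    return m_a_gev * (0.203 * e_over_n - 0.39)

# Band edges for a 10 MeV axion-like particle, using E/N = 44/3 and E/N = 2.
m_a_gev = 10e-3
print(f_a_gev(10e6))                   # f_a of roughly 0.57 GeV for m_a = 10 MeV
print(g_a_gamma(m_a_gev, 44.0 / 3.0))  # upper edge, roughly 2.6e-2 GeV^-1
print(g_a_gamma(m_a_gev, 2.0))         # lower edge, roughly 1.6e-4 GeV^-1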
For the Dine-Fischler-Srednicki-Zhitnitsky (DFSZ) benchmark model <cit.>, for which couplings to electrons would be dominant relative to the photon couplings, we take g_ae = m_e C_ae(m_a, tanβ)f_a where the coefficient C_ae is dependent on the rotation angle β for the vacuum expectation values of the extended Higgs sector in DFSZI and DFSZII models; DFSZ(I): C_ae = -1/3sin^2β + loop factors DFSZ(II): C_ae = 1/3sin^2β + loop factors Here we take tanβ values between 0.25 and 120, which equates to sinβ = 0.242536 and sinβ = 0.999965, respectively <cit.>.
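In the same spirit, a hedged sketch (ours) of the DFSZ electron-coupling span, neglecting the loop factors mentioned above and reusing the f_a(m_a) relation from the previous subsection:

import math

def f_a_gev(m_a_ev: float) -> float:
    # Peccei-Quinn scale f_a = (5.691e6 eV / m_a) GeV.
    return 5.691e6 / m_a_ev

def g_ae_dfsz(m_a_ev: float, tan_beta: float, variant: str = "I") -> float:
    # DFSZ electron coupling g_ae = m_e * C_ae / f_a, with
    # C_ae = -(1/3) sin^2(beta) for DFSZ(I) and +(1/3) sin^2(beta) for DFSZ(II);
    # the loop factors are neglected in this sketch.
    m_e_gev = 0.511e-3
    sin2_beta = math.sin(math.atan(tan_beta)) ** 2
    c_ae = (-1.0 if variant == "I" else 1.0) * sin2_beta / 3.0
    return m_e_gev * c_ae / f_a_gev(m_a_ev)

# Span of tan(beta) quoted in the text: 0.25 (sin(beta) ~ 0.2425) to 120 (sin(beta) ~ 0.99997).
m_a_ev = 10e6  # a 10 MeV ALP
print(g_ae_dfsz(m_a_ev, 0.25), g_ae_dfsz(m_a_ev, 120.0))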
http://arxiv.org/abs/2307.05542v2
20230708193421
Geometric parametrization of $SO(D+1)$ phase space of all dimensional loop quantum gravity: II. Beyond the simplicity constraint surface
[ "Gaoping Long" ]
gr-qc
[ "gr-qc" ]
The regularization of the scalar constraint and the Fermion coupling problem indicate that it is necessary to consider some kind of gauge fixing method to deal with the simplicity constraint in all dimensional SO(D+1) loop quantum gravity. Coherent states with a well-behaved peakedness property are an essential ingredient for carrying out the gauge fixing method. To provide the basic tool for constructing such coherent states, we generalize the twisted geometry parametrization of the SO(D+1) holonomy-flux phase space of (1+D)-dimensional loop quantum gravity from the edge simplicity constraint surface to a dense subspace in the SO(D+1) holonomy-flux phase space. The symplectic structure on the twisted geometric parameter space and the Poisson structure in terms of the twisted geometric variables are analyzed. Besides, we discuss the relation between the two twisted geometry parametrizations constructed respectively on the edge simplicity constraint surface and on the dense subspace of the SO(D+1) holonomy-flux phase space. Our results show that these two types of parametrization are equivalent to each other upon carrying out the gauge reduction with respect to the edge simplicity constraint. § INTRODUCTION As a non-perturbative and background-independent approach to unify general relativity (GR) and quantum mechanics, loop quantum gravity (LQG) has made remarkable progress in several aspects <cit.><cit.><cit.><cit.>. For instance, various symmetry-reduced models have been established in the framework of LQG to give the resolution of singularities <cit.>, and various attempts have been made in the framework of LQG to account for the BH entropy <cit.>. Loop quantum gravity in all dimensional spacetime is also of interest because of its potential for absorbing valuable ideas from other gravity theories (e.g. supersymmetries and extra dimensions <cit.>) into the loop quantization framework of GR. The loop quantization approach for GR in all dimensions was first developed by Bodendorfer, Thiemann and Thurn <cit.><cit.><cit.>. In detail, the all dimensional LQG is based on the connection formulation of (1+D)-dimensional GR in the form of an SO(D+1) Yang-Mills theory, with the kinematic phase space coordinatized by the canonical pairs (A_aIJ,π^bKL), consisting of the spatial SO(D+1) connection fields A_aIJ and the vector fields π^bKL. In this formulation, the theory is governed by the first class system of the SO(D+1) Gaussian constraints, the (D+1)-dimensional ADM constraints and the additional simplicity constraints. Similar to the Gaussian constraints, the simplicity constraints, taking the form S^ab_IJKL:=π^a[IJπ^|b|KL], generate extra gauge symmetries in the SO(D+1) Yang-Mills phase space. It has been shown that the connection phase space correctly reduces to the familiar ADM phase space by carrying out the symplectic reductions with respect to the Gaussian and simplicity constraints. Similar to the case of SU(2) LQG, the loop quantization of the SO(D+1) Yang-Mills theory leads to the spin-network states of the SO(D+1) holonomies on graphs, which carry the quanta of the flux operators representing the fluxes of π^bKL over some (D-1)-dimensional faces. The Hilbert space composed of the spin-network states indicates the holonomy-flux phase space associated to each graph, with the Poisson algebras among holonomies and fluxes in the holonomy-flux phase space being isomorphic to the quantum algebras among them in the quantum Hilbert space.
To look for the all-dimensional Regge ADM data encoded in the SO(D+1) spin-network states, it is necessary to find the degrees of freedom of discrete geometries encoded in the SO(D+1) holonomy-flux variables, by considering a gauge reduction procedure with respect to both of the SO(D+1) Gaussian constraints and the simplicity constraints in the holonomy-flux phase space. A series of studies in this direction is first carried out in the SU(2) formulation of (1+3)-dimensional LQG <cit.><cit.><cit.><cit.><cit.>, and then they are generalized to the SO(D+1) holonomy-flux phase space in our companion paper <cit.>. Specifically, since the simplicity constraints become anomalous at the vertices of the graphs, the reductions with respect to the Gaussian and simplicity constraints are guided by the twisted geometry parametrization of the edge simplicity constraint surface in the holonomy-flux phase space of SO(D+1) LQG. Especially, the twisted geometry interpretation of holonomy-flux variables suggests that the Gaussian and edge simplicity constraints should be imposed strongly since they generate true gauge transformations, while the vertex simplicity constraints should be imposed weakly. The reduced space parametrized by the twisted geometric parameters give a discrete Regge geometry picture, which can be regarded as the discrete version of the ADM phase space of GR. An important application of the twisted geometry parametrization is the construction of the twisted geometry coherent state. Such kind of coherent states is firstly established in SU(2) LQG <cit.>, and then it is generalized to the SO(D+1) LQG with the restriction of the simple representations <cit.>. Specifically, based on the twisted geometry parameters, the simple twisted geometry coherent state in the strong solution space of quantum edge simplicity constraints is established by selecting the dominant terms (which is referred to as Perelomov type coherent state <cit.>) with simple representation of SO(D+1) in the decomposition of the heat-kernel coherent state of SO(D+1) <cit.>. It has been shown that the simple twisted geometry coherent states take the Gaussian superposition formulations. Especially, the simple twisted geometry coherent states provides an over-complete basis of the strong solution space of quantum edge simplicity constraints, and their wave functions have well-behaved peakedness and Ehrenfest properties in the reduced phase space with respect to the edge simplicity constraints <cit.>. In fact, the twisted geometry parametrization of the SO(D+1) holonomy-flux phase space discussed in Ref.<cit.> concerns the issues on the constraint surface of edge simplicity constraint, and the resulted twisted geometry variables only give the parametrization of the reduce phase space with respect to edge simplicity constraint. Correspondingly, the simple twisted geometry coherent states constructed based on the twisted geometry parametrization of the reduce phase space are the gauge (with respect to edge simplicity constraint) invariant coherent states <cit.>. In other words, the wave functions of these gauge (with respect to edge simplicity constraint) invariant coherent states are constants along the corresponding gauge orbits, so that each of them peaks at a gauge orbit instead of a point in the phase space <cit.>. As we have mentioned above, the edge simplicity constraint should be imposed strongly following the twisted geometry interpretation of holonomy-flux variables. 
Thus, it seems that all of the studies for all dimensional SO(D+1) LQG can be completed in the strong solution space of quantum edge simplicity constraint, which is the gauge (with respect to simplicity constraint) invariant subspace of the full Hilbert space of all dimensional SO(D+1) LQG. Nevertheless, several discussions has shown that it is necessary to consider some kind of gauge fixed solution space with respect to simplicity constraint, to deal with some of the issues appeared in the all dimensional SO(D+1) LQG. Let us introduce two issues to explain this necessity. First, the regularization of the scalar constraint can be carried out by following the standard loop regularization method <cit.><cit.><cit.>. The resulted regularized scalar constraint contains the Euclidean term which is given by the antisymmetric contraction of the holonomies along some closed loops and the fluxes at the beginning and target point of these loops. Classically, this Euclidean term captures the information of both of the intrinsic and extrinsic curvature along these closed loops. However, it is shown that the Euclidean term in the quantized scalar constraint can not capture the information of those intrinsic and extrinsic curvature in the strong solution space of quantum edge simplicity constraint, since the strong imposition of quantum edge simplicity constraint leads to the gauge averaging, which vanishes some critical ingredients in the holonomies <cit.>. Thus, the standard loop regularization method is conflict to the strong imposition of the edge simplicity constraint. To deal with issue, one may consider the gauge fixed solution of the edge simplicity constraint to avoid the gauge averaging, so that the scalar constraint operator given by standard loop regularization method captures the information of those intrinsic and extrinsic curvature correctly. This is the first issue which points out the necessity to consider then gauge fixed solution space with respect to simplicity constraint. The second issue which points out this necessity is the the Fermion coupling problem in all dimensional LQG <cit.>. Specifically, the strong imposition of the quantum edge simplicity constraint restricts that the holonomies in all dimensional LQG can only be represented in the simple representation space of SO(D+1), which leads that the holonomies can not transform the Fermions which take values in the spinor representation space of SO(D+1) for D≥4. An alternative scheme to deal with this issue is to consider the gauge fixed solution of quantum edge simplicity constraint based on the coherent states, which ensures that the holonomies could take matrixes in the spinor representation space of SO(D+1), so that they are able to describe the transformation of Fermions along edges. Usually, in the classical theory, the gauge fixing can be realized by restricting the physical considerations on a section of the gauge orbits on the constraint surface of edge simplicity constraint. However, this is not valid in the quantum theory, since the wave functions of the quantum states which sharply converge to the constraint surface of edge simplicity constraint are always dispersed along the gauge orbits. 
To overcome this problem, it is reasonable to consider the coherent state whose wave function peaks at a point in the phase space, so that one could have the state whose wave function converges to both of the constraint surface of edge simplicity constraint and a section of the gauge orbits, with this convergence is controlled by the width of the wave function of the coherent state. Such kind of coherent state whose wave function peaks at a point in the SO(D+1) holonomy-flux phase space could be constructed by following a similar procedure as the construction of the simple twisted geometry coherent state in the strong solution space of quantum edge simplicity constraint <cit.>. More specifically, one need to consider a more generalized twisted geometry parametrization, which is able to coordinate the (almost) whole SO(D+1) holonomy-flux phase space instead of the reduced phase space. Then, based on this more generalized twisted geometry parametrization, one could decompose the heat-kernel coherent state of SO(D+1) and select some dominant terms to formulate the twisted geometry coherent state involving the non-simple representations of SO(D+1), which will be referred as to the non-simple twisted geometry coherent state in all dimensional LQG. As the first step to establish the non-simple twisted geometry coherent state in all dimensional LQG, it is necessary to extend the twisted geometry parametrization to the full SO(D+1) holonomy-flux phase space. In this article, we will establish the twisted geometry parametrization of a dense subspace of the full SO(D+1) holonomy-flux phase space, and extend this parametrization as a symplectic-morphism. Besides, we will show that the twisted geometry parametrization of edge simplicity constraint surface introduced in our previous work <cit.> can be regarded as a special cases of the construction in this article. This article is organized as follows. In our brief review of the classical connection formulation of all dimensional GR in Section <ref>, we will also introduce the SO(D+1) holonomy-flux phase space and the discretized formulation of the kinematical constraints. In Section <ref> and Section <ref> we will introduce the twisted geometry parametrization for a dense subspace of the SO(D+1) phase space, and analyze the Poisson structures among the new geometric parametrization variables. Then, in Section <ref> we will discuss the relation between the twisted geometry parametrizations of the edge simplicity constraint surface and the dense subspace of the SO(D+1) holonomy-flux phase space. Finally, we will conclude with the outlook for the possible next steps of the future research. § PHASE SPACE OF ALL DIMENSIONAL LOOP QUANTUM GRAVITY §.§ Connection phase space The classical connection formulation of GR with arbitrary spacetime dimensionality of (1+D) is first developed by Bodendofer, Thiemann and Thurn in Ref.<cit.>. This continuum connection phase space is coordinatized by a so(D+1) valued 1-form field A_aIJ and a vector field π^bKL on the D-dimensional spatial manifold Σ, with the non-trivial Poisson brackets between them being given by {A_aIJ(x), π^bKL(y)}=2κβδ_a^bδ_[I^Kδ_J]^Lδ^(D)(x-y), where β is the Barbero-Immirzi parameter and κ is the gravitational constant. 
It is known that this connection phase space correctly reduces to the familiar ADM phase space after the standard symplectic reduction procedure with respect to the first-class constraint system composed by the Gauss constraints 𝒢^IJ≈0 and simplicity constraints S^ab_IJKL:=π^a[IJπ^|b|KL]≈0. Specifically, the simplicity constraint can be solved as π^aIJ=2√(q)n^[Ie^|a|J], where e^a_I is a dual D-bein field, n^I satisfying n^In_I=1 is determined by e^a_I with n^Ie_aI=0, and q is the determinant of the spatial metric q_ab which is determined by π^aIJ with q^ab=e^aIe^b_I on the simplicity constraint surface. One can split A_aIJ as A_aIJ≡Γ_aIJ(π)+β K_aIJ where Γ_aIJ(π) is a functional of π^aIJ and it satisfies Γ_aIJ(π)=Γ_aIJ(e) on the simplicity constraint surface, with Γ_aIJ(e) being the unique torsionless spin connection compatible with the D-bein e_aI. Then, the densitized extrinsic curvature can be given by K̃_a^ b=K_aIJπ^bIJ on the constraint surface of both Gaussian and simplicity constraint surface. It is easy to check that the Gaussian constraint generate the standard SO(D+1) gauge transformation of the connection field and its conjugate momentum. Now, let us consider the simplicity constraints from the perspectives of the corresponding gauge transformations. First, the solutions π^aIJ=2√(q)n^[Ie^|a|J] to the simplicity constraint introduced above defines the constraint surface of the simplicity constraints. Then, one can verify that the infinitesimal gauge transformations induced by simplicity constraints are given by <cit.> δ K_c^PQ={∫_Σd^Dxf_ab^IJKLπ^a_[IJπ^b_KL](x), K_c^PQ(y)}=4κ f_cb^[PQKL]π^b_KL(y). Notice that, on the simplicity constraint surface we have π^aIJ=2√(q)n^[Ie^|a|J] so that δ K_c^IJn_I=0. Further, by introducing the decomposition K_aIJ≡ 2n_[IK_|a|J]+K̅_aIJ, where K̅_aIJ:=η̅_I^Kη̅_J^LK_aKL with η̅^I_J=δ^I_J-n^I n_J and K̅_aIJn^I=0, we immediately find that K̅_aIJ is the pure gauge component, while the components 2n_[IK_|a|J] are gauge invariant with respect to the transformations given in (<ref>). From the expressions of the ADM variables qq^ab=1/2π^aIJπ^b_IJ and K̃_a^ b=K_aIJπ^bIJ, it is easy to see that these variables are indeed gauge invariant with respect to the simplicity constraints on the constraint surface. Thus, through the symplectic gauge reduction procedure, the simplicity constraints eliminate the two parts of degrees of freedom— restricting π̅^aIJ:=π^aIJ-2√(q)n^[Ie^|a|J]=0 by the constraint equation and removing the pure-gauge components K̅_aIJ:=η̅_I^Kη̅_J^LK_aKL. Following these results, the geometric variables constructed by the ADM variables (q_ab,K̃^cd) can be extended as functionals in the connection phase space, with their original geometric interpretation are remained on the constraints surface. §.§ Holonomy-flux phase space The quantization of the connection formulation of (1+D)-dimensional GR can be carried out by following the standard loop quantization procedures, which leads to a Hilbert space ℋ given by the completion of the space of cylindrical functions on the quantum configuration space <cit.>. This Hilbert space ℋ can be regarded as a union of the spaces ℋ_γ=L^2((SO(D+1))^|E(γ)|,dμ_Haar^|E(γ)|) on all possible graphs γ, where E(γ) denotes the set of edges of γ and dμ_Haar^|E(γ)| denotes the product of the Haar measure on SO(D+1). The Gaussian constraint and simplicity constraint can be promoted as constraint operators in this Hilbert space. 
However, it has been turned out that the quantum brackets among these constraints give an open and anomalous quantum algebra, which is distinguished with the corresponding constraint algebra of first class in connection phase space <cit.>. Hence, it is necessary to propose a proper treatment of these quantum constraints, to reduce the gauge degrees of freedom and remain the physical degrees of freedom correctly. A reasonable method to reach this goal is to construct the gauge reductions with respect to Gaussian and simplicity constraints in the holonomy-flux phase space. More specifically, since the classical constraint algebras in the holonomy-flux phase space are isomorphic to the quantum constraint algebras in the quantum theory, one can treat the Gaussian and simplicity constraints in the holonomy-flux phase space and quantum theory on the same footing. Then, the degrees of freedom reduced in the procedures of the imposition of quantum constraint operators can be reflected in the procedures of the gauge reductions with respect to Gaussian and simplicity constraints in the holonomy-flux phase space. Through this gauge reductions, one can clarify the gauge degrees of freedom and verify that if the treatment of these constraints remains correct physical degrees of freedom. Now, let us first give a brief review of the holonomy-flux phase space. The quantum geometry of loop quantum gravity is described based on the spatially smeared variables — the D-bein fluxes over (D-1)-dimensional faces and connection holonomies over paths— for the conjugate pairs of elementary variables. We will focus on the holonomies and fluxes based on one specific graph for the following. The edges of the given graph naturally provide the set of paths for a fixed set of holonomies, and the cell decomposition dual to the graph provides the set of (D-1)-faces specifying a fixed set of fluxes. In this setting, the holonomy over one of the edges is naturally conjugating to the flux over the (D-1)-face traversed by the edge, with this pair satisfies the smeared version of the Poisson algebra (<ref>), and thus form a new phase space. More precisely, given the graph γ embedded in the spatial manifold, we consider a new algebra given by the holonomy-flux variables (h_e, X_e)∈ SO(D+1)× so(D+1) over all edges e of γ. These pairs of variables represent the discretized version of the connection A_aIJ and its conjugate momentum π^bKL. Specifically, the holonomy of A_aIJ along an edge e∈γ defined by h_e[A]:=𝒫exp(∫_eA)=1+∑_n=1^∞∫_0^1dt_n∫_0^t_ndt_n-1...∫_0^t_2 dt_1A(t_1)...A(t_n), where A(t):=1/2ė^aA_aIJτ^IJ, ė^a is the tangent vector field of e, τ^IJ is a basis of so(D+1) given by (τ^IJ)^def._KL=2δ^[I_Kδ^J]_L in definition representation space of SO(D+1), and 𝒫 denotes the path-ordered product. The flux X^IJ_e of π^aIJ through the (D-1)-dimensional face dual to edge e is defined by X^IJ_e:=-1/4β a^D-1tr(τ^IJ∫_e^⋆ϵ_aa_1...a_D-1h(ρ^s_e(σ)) π^aKL(σ)τ_KLh(ρ^s_e(σ)^-1)), where a is an arbitrary but fixed constant with the dimension of length, e^⋆ is the (D-1)-face traversed by e in the dual lattice of γ, ρ_e^s(σ): [0,1]→Σ is a path connecting the source point s_e∈ e to σ∈ e^⋆ such that ρ_e^s(σ): [0,1/2]→ e and ρ_e^s(σ): [1/2, 1]→ e^⋆. The Poisson algebra between the holonomy-flux variables can be induced from the Poisson bracket (<ref>) between the connection variables, which reads {h_e, h_e'}=0, {h_e, X^IJ_e'}=δ_e,e'κ/a^D-1d/dλ(e^λτ^IJh_e)|_λ=0, {X^IJ_e, X^KL_e'}=δ_e,e'κ/2a^D-1(-δ^IKX_e^JL-δ^JL X^IK_e+δ^ILX_e^JK+δ^JKX_e^ IL). 
Notice that h_e∈ SO(D+1), X_e^IJ∈ so(D+1) and SO(D+1)× so(D+1)≅ T^∗ SO(D+1), the new discrete phase space called the holonomy-flux phase space of SO(D+1) loop quantum gravity on a fixed graph, is a direct product of SO(D+1) cotangent bundles. Finally, the complete phase space of the theory is given by taking the union over the holonomy-flux phase spaces of all possible graphs. Similar to the SU(2) case, the phase space coordinated by the holonomy-flux variables (h_e, X_e) of SO(D+1) loop quantum gravity can be regarded as the discretized version of the continuum phase space. The (discretized) Gaussian and simplicity constraints in the holonomy-flux phase space are constructed in agreement with the corresponding quantum constraints. With X_-e=-h_e^-1X_eh_e≡X̃_e, the (discretized) Gaussian constraints G_v^IJ≈0 for each vertex v∈γ of the graph take the form <cit.> G_v^IJ=∑_e|s(e)=vX_e^IJ+∑_e|t(e)=vX̃_e^IJ≈0, where s(e) and t(e) denote the source and target points of the oriented edge e respectively. The (discretized) simplicity constraints consist of the edge simplicity constraints S^IJKL_e≈0 and vertex simplicity constraints S^IJKL_v,e,e'≈0, which take the forms <cit.> S_e^IJKL≡ X^[IJ_e X^KL]_e≈0, ∀ e∈γ, S_v,e,e'^IJKL≡ X^[IJ_e X^KL]_e'≈0, ∀ e,e'∈γ, s(e)=s(e')=v. It has been shown that, since the commutative Poisson algebra between the conjugate momentum variables {π^bKL} becomes non-commutative Poisson algebra between the flux variables { X^KL_e} after the smearing, the Poisson algebra among the discrete version of simplicity constraints become non-closed and thus anomalous, which leads that the symplectic reductions in the holonomy-flux phase space becomes difficult to implement <cit.>. To deal with this issue, the twisted geometry parametrization of the holonomy-flux phase space is constructed, which ensures that the gauge reductions with respect to the Gaussian and simplicity constraint in the holonomy-flux phase space can be carried out with the guidance of the twisted geometric interpretation of the holonomy-flux variables <cit.>. The twisted geometry parametrization for the the SU(2) holonomy-flux variables of (1+3)-dimensional LQG is first introduced by a series of studies following the original works by Freidel and Speziale <cit.><cit.>. The space of the twisted geometry for SU(2) LQG can undergo a symplectic reduction with respect to the discretized Gauss constraints, giving rise to a reduced phase space containing the discretized ADM data of a polyhedral Regge hypersurface. Following a similar procedure, the twisted geometry parametrization in all dimensional SO(D+1) LQG has been constructed on the edge simplicity constraint surface in the SO(D+1) holonomy-flux phase space in our companion paper <cit.>. It has been shown that the gauge reductions with respect to the simplicity constraints and Gaussian constraints in SO(D+1) LQG can be carried out properly in the twisted geometry parametrization space, which leads to a clear correspondence between the original holonomy-flux variables (h_e, X_e) on edge simplicity constraint surface and the D-hypersurface discrete geometry data in Regge geometry formulation. Nevertheless, it is not enough to construct the twisted geometric parametrization on the edge simplicity constraint surface in the SO(D+1) holonomy-flux phase space. 
As we have mentioned in introduction, several explorations in the quantum theory of SO(D+1) LQG requires us consider the quantum states whose wave functions are dispersed beyond the edge simplicity constraint surface. Hence, it is necessary to extend the twisted geometry parametrization to interpret the phase space points which are not located in the edge simplicity constraint surface. § GEOMETRIC PARAMETRIZATION OF SO(D+1) HOLONOMY-FLUX PHASE SPACE To ensure our statements and the notations clearer, we will first generalize the twisted geometry parametrization to a dense subspace of T^∗ SO(D+1) in this section. Then, it will be left to section 5 to discuss the relation between the twisted geometry parametrizations constructed in this article and previous works <cit.>. §.§ Beyond the edge-simplicity constraint surface Recall the SO(D+1) holonomy-flux phase space ×_e∈γT^∗ SO(D+1)_e associated to the given graph γ. Let us focus on the holonomy-flux phase space T^∗ SO(D+1) associated to a single edge without loss of generality. Notice that the semi-simple elements in so(D+1) compose a dense subset so(D+1)_ss⊂ so(D+1) and we have T^∗ SO(D+1)≅ SO(D+1)× so(D+1). Then, we can define a dense subspace of T^∗ SO(D+1) as T_ss^∗ SO(D+1):={(h, X)| h∈ SO(D+1), X is a semi-simple element of so(D+1)}. To give the explicit formulation of the twisted geometric parametrization of T_ss^∗ SO(D+1), let us first introduce some new notations. Consider the orthonormal basis {δ_1^I,δ_2^I,...,δ_D+1^I} of ℝ^D+1, one has the basis {τ_IJ} of so(D+1) given by τ_IJ=(τ_IJ)^KL_def.:=2δ_I^[Kδ_J^L] in the definition representation space of SO(D+1), where (τ_IJ)^KL_def. is the generator of the infinitely small rotation in the 2-dimensional vector space spanned by the two vectors δ_I^K and δ_J^L. Then, let us introduce the maximum commutative sub-Lie algebra of so(D+1) spanned by {τ_1, τ_2,...,τ_m} with m=[D+1/2], where we define τ_1:=τ_12, τ_2:= τ_34, ..., τ_m:= τ_D,D+1 for D+1 being even, and τ_1:=τ_12, τ_2:= τ_34, ..., τ_m:= τ_D-1,D for D+1 being odd. This maximum commutative sub-Lie algebra of so(D+1) generates the maximum commutative subgroup 𝕋^m:=×_=1^m SO(2)_, m=[D+1/2]. Then, SO(D+1) can be regarded as a fiber Bundle with the fibers 𝕋^m on the base manifold ℚ_m:=SO(D+1)/𝕋^m, which can be also given by ℚ_m={𝕍:=(V_1,...,V_m)|V_=gτ_ g^-1, ∈{1,...,m}, g∈ SO(D+1)}. One can choose a Hopf section n: ℚ_m↦ SO(D+1), 𝕍↦ n(𝕍) and another Hopf section ñ: ℚ̃_m↦ SO(D+1), 𝕍̃↦ñ(𝕍̃) for the copy ℚ̃_m of ℚ_m, which satisfy V_1=nτ_1n^-1,...,V_m=nτ_mn^-1, and Ṽ_1=-ñτ_1ñ^-1,...,Ṽ_m=-ñτ_mñ^-1 with ℚ_m∋𝕍:=(V_1,...,V_m) and ℚ̃_m∋𝕍̃:=(Ṽ_1,...,Ṽ_m). Observe that the choice for the Hopf sections is clearly non-unique, and from now on our parametrization will be given under one fixed choice of {n_e,ñ_e} for each edge e. Then, in the subspace T_ss^∗ SO(D+1)_e associated to each edge e, the generalized twisted geometry parametrization can be given by the map (𝕍_e,𝕍̃_e,η⃗_e,ξ⃗_e)↦(h_e, X_e)∈ T_ss^∗ SO(D+1)_e: X_e=1/2n_e(η_e^1 τ_1+...+η_e^m τ_m)n_e^-1 h_e=n_ee^ξ_e^1τ_1...e^ξ_e^mτ_mñ_e^-1, where we defined η⃗_e:=(η_e^1,...,η_e^m), η_e^1,η_e^2,...,η_e^m∈ℝ with η_e^1≥η_e^2≥,...,≥η_e^m≥0 and ξ⃗:=(ξ_e^1,...,ξ_e^m) with ξ_e^1,...,ξ_e^m ∈(-π,π]. By defining η_e^1=:χ_e^1+...+χ_e^m, η_e^2 =:χ_e^2+...+χ_e^m, ..., η_e^m-1=:χ_e^m-1+χ_e^m, η_e^m=:χ_e^m with χ_e^1,...,χ_e^m≥ 0, one can replacing η⃗_e by χ⃗_e:=(χ_e^1,...,χ_e^m) in the parametrization (<ref>). 
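A simple dimension count (our own check, not part of the original derivation) confirms that the parameter set (𝕍_e,𝕍̃_e,η⃗_e,ξ⃗_e) carries exactly the dimensionality of the space it is meant to cover. Since dim SO(D+1)=D(D+1)/2 and dim 𝕋^m=m, one has
\[
\dim\mathbb{Q}_m=\dim\tilde{\mathbb{Q}}_m=\frac{D(D+1)}{2}-m ,
\]
so that
\[
\dim\mathbb{Q}_m+\dim\tilde{\mathbb{Q}}_m+\underbrace{m}_{\vec{\eta}_e}+\underbrace{m}_{\vec{\xi}_e}
=D(D+1)=\dim\big(T^{\ast}SO(D+1)_e\big),
\]
with m=[(D+1)/2]; the ordering condition on η⃗_e and the range (-π,π] of the ξ_e^ only restrict the parameters to subsets of the same dimension.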
The twisted geometry parametrization (<ref>) of T_ss^∗ SO(D+1)_e associated to a single edge can be directly extended to the whole graph γ. Correspondingly, one can introduce the Levi-Civita holonomies {h^Γ_e|e∈γ} determined by the fluxes {X_e∈ so(D+1)_ss|e∈γ} and {X̃_e∈ so(D+1)_ss|e∈γ}, which takes the form h^Γ_e≡ n_ee^ζ_e^1τ_1...e^ζ_e^mτ_mñ_e^-1. Note that the variables (ζ_e^1,...,ζ_e^n) are well-defined via the given h^Γ_e and the chosen Hopf sections, thus (ζ_e^1,...,ζ_e^n) are already fixed by the given {X_e∈ so(D+1)_ss|e∈γ} and {X̃_e∈ so(D+1)_ss|e∈γ}. Then, one can factor out h^Γ_e from h_e through the expressions h_e= (e^(ξ_e^1-ζ_e^1)n_eτ_1n_e^-1...e^(ξ_e^m-ζ_e^m)n_eτ_mn_e^-1) h^Γ_e =h^Γ_e(e^(ξ_e^1-ζ_e^1)ñ_eτ_1ñ_e^-1... e^(ξ_e^m-ζ_e^m)ñ_eτ_mñ_e^-1) in the perspectives of the source point and target point of e respectively. The above decomposition with twisted geometry parameters can be adopted to the splitting of the the Ashtekar connection as A_a=Γ_a+β K_a on a given graph. Specifically, one can consider the integral of A_a=Γ_a+β K_a∈ so(D+1) along an infinitesimal edge direction ℓ^a_e, which leads to A_e≡ A_aℓ^a_e, Γ_e≡Γ_aℓ^a_e and K_e≡ K_aℓ^a_e. Clearly, we can establish the following correspondence of h_e= e^A_e and h^Γ_e= e^Γ_e. The remaining factor should account for the K_e. According to the above discussion, the value of K_e may thus be expressed in the perspectives of the source point and target point of e, respectively as (e^(ξ_e^1-ζ_e^1)n_eτ_1n_e^-1...e^(ξ_e^m-ζ_e^m)n_eτ_mn_e^-1) =e^β K_e or (e^(ξ_e^1-ζ_e^1)ñ_eτ_1ñ_e^-1... e^(ξ_e^m-ζ_e^m)ñ_eτ_mñ_e^-1)= e^β K_e . Further, we have K_e =1/βn_e((ξ_e^1-ζ_e^1)τ_1+...+(ξ_e^m-ζ_e^m)τ_m)n_e^-1 or K_e =1/βñ_e((ξ_e^1-ζ_e^1)τ_1+...+(ξ_e^m-ζ_e^m)τ_m)ñ_e^-1 when it is expressed in the perspectives of the source point and target point of e respectively. The set of the variables ((η^e_1,...,η^e_m), (ξ^e_1,...,ξ^e_m),𝕍_e, 𝕍̃_e) gives the generalization of twisted geometry parametrization for the SO(D+1) holonomy-flux phase space. Comparing with the twisted geometry parametrization for the edge-simplicity constraint surface in the SO(D+1) holonomy-flux phase space introduced in our companion paper <cit.>, this generalized parametrization scheme covers the dense subset of the SO(D+1) holonomy-flux phase space, which are far beyond the edge-simplicity constraint surface. We will now carry out an analysis of the symplectic structure of the SO(D+1) holonomy-flux phase space based on the variables ((η^e_1,...,η^e_m), (ξ^e_1,...,ξ^e_m),𝕍_e, 𝕍̃_e) , before coming back to provide more support on the relation between the generalized parametrization scheme in this paper and that only for the edge simplicity constraint surface given in our companion paper <cit.>. § SYMPLECTIC ANALYSIS OF SO(D+1) HOLONOMY-FLUX PHASE SPACE Notice that the discussions in this section only depend on each single edge of the graph. To simplify our notations, we will focus on the analysis on a single edge and omit the label e without loss of generality. §.§ Symplectic structure of SO(D+1) holonomy-flux phase space The symplectic structure of SO(D+1) holonomy-flux phase space has been discussed in our companion paper <cit.>, let us give a brief review of the main notations as follows. Recall that the SO(D+1) holonomy-flux phase space associated with each edge of a given graph can be given by the group cotangent space T^*SO(D+1), as a phase space it enjoys the natural symplectic structure of the T^*SO(D+1). 
To give the explicit formulation of this symplectic structure, let us introduce the function f(h) on SO(D+1)∋ h, and the element p_X∈ so(D +1)^∗ which is a linear function of Y∈ so(D+1) defined by p_X(Y)≡ X^KLY_KL, where X=X^KL∈ so(D+1). A right-invariant vector field X̂ associated to the Lie algebra element X∈ so(D+1), acts on a function f(h) via the right derivative ∇_X^R as ∇_X^Rf(h)≡d/dtf(e^-tXh)|_t=0; under the adjoint transformation X↦ -hXh^-1, we obtain the corresponding left derivative ∇_X^Lf(h)≡d/dtf(he^tX)|_t=0=-∇^R_hXh^-1f(h). One can straightforwardly show that the map from the right invariant vector fields X̂ to the corresponding elements X∈ so(D+1) is given by the algebra-valued, right-invariant 1-form dhh^-1, which reads i_X̂(dhh^-1)=(ℒ_X̂h)h^-1=-X, where i denotes the interior product, and ℒ_Ŷ≡ i_Ŷd+di_Ŷ denotes the Lie derivative. Now, the natural symplectic potential for T^∗ SO(D+1) can be expressed as Θ≡ X^IJ(dhh^-1)_IJ≡Tr(Xdhh^-1). The symplectic 2-form then follows as Ω≡ -dΘ=- dTr(Xdhh^-1)=1/2Tr(dX̃∧ h^-1dh-dX∧ dhh^-1) where we have introduced X̃≡-h^-1Xh. From the symplectic 2-form, the Poisson brackets among the interesting phase space functions f≡ f(h) and p_Y≡ p_Y(X)=Y^IJX_IJ is given by <cit.> {p_Y,p_Z}=p_[Y,Z], {p_Y,f(h)}=∇^R_Yf(h), {f(h),f'(h)}=0. One can see from the brackets (<ref>) that the Poisson action of p_Y(X) generates left derivatives. Similarly, it is easy to check that the action of p̃_Y(X)≡ Y^IJX̃_IJ with X̃=-h^-1Xh generate the right derivative {p̃_Y,f(h)}=∇^L_Yf(h). Moreover, one can check the commutative relation {p_Y,p̃_Z}=0. Finally, it is easy to verify that, by setting 2κ/a^D-1=1, the Poisson brackets (<ref>) given by the natural symplectic potential (<ref>) for T^∗ SO(D+1) are identical with the one (<ref>) induced by the symplectic structure (<ref>) in the SO(D+1) connection phase space <cit.>. In the following part of this article, we will analyze the symplectic structure on T^∗ SO(D+1) based on the symplectic potential Θ without loss of generality. §.§ Symplectomorphism between SO(D+1) holonomy-flux phase space and generalized twisted geometry parameter space From now on, let us focus on the analysis on one single edge e of given graph γ, and we omit the the label e for all of the notations. Denote by B:=ℚ_m×ℚ̃_m × (×_=1^m ℝ^_+)×(× _=1^m S^1_) the collection of the generalized twisted geometric parameters (𝕍,𝕍̃,χ⃗,ξ⃗). It is easy to see that the map (<ref>) is not a one to one mapping. More explicitly, one can decompose B=B_0∪Ḃ with Ḃ:= B|_η_m> 0 and B_0:= B∖Ḃ. Then, one can find that the map (<ref>) is a one to one mapping between Ḃ and its image Ḃ^∗⊂ T_ss^∗ SO(D+1), while it is a many to one mapping between B_0 and its image B_0^∗⊂ T_ss^∗ SO(D+1). We will first focus on the symplectic structure on B in this subsection, and then go back to consider the many to one mapping between B_0 and its image B_0^∗ in section <ref>. The one to one mapping between Ḃ and its image Ḃ^∗⊂ T_ss^∗ SO(D+1) is also an isomorphism Ḃ→Ḃ^∗⊂ T_ss^∗ SO(D+1). Based on the isomorphism (<ref>), we may use the generalized twisted geometric parameters to express the induced symplectic structure of Ḃ^∗⊂ T_ss^∗ SO(D+1) inherited from the phase space T^*SO(D+1). First, the induced symplectic potential can be expressed as Θ_Ḃ^∗ = Tr(Xdhh^-1)|_Ḃ^∗⊂ T_ss^∗ SO(D+1)⊂ T^∗ SO(D+1) = 1/2∑_'=1^mη_'Tr(nτ_'n^-1 (dnn^-1+n(∑_dξ^τ_)n^-1-ne^∑_ξ^τ_ñ^-1dññ^-1ñe^-∑_ξ^τ_ n^-1)) = 1/2∑_=1^mη_Tr(V_ dnn^-1)+ ∑_=1^mη_dξ^- 1/2∑_=1^mη_Tr(Ṽ_ dññ^-1). 
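The right and left derivatives entering the Poisson brackets above can be checked directly from their definitions; the following is only a finite-difference sanity check (assuming numpy/scipy) for the linear test function f(h)=Tr(Mh), for which ∇^R_X f(h) = -Tr(MXh) and ∇^L_X f(h) = Tr(MhX). The test function and all numerical values are illustrative choices of ours.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
dim = 4
A, B, M = rng.standard_normal((3, dim, dim))
X = A - A.T                        # an element of so(4)
h = expm(B - B.T)                  # an element of SO(4)

f = lambda g: np.trace(M @ g)      # simple linear test function on the group
eps = 1e-6

# Right derivative from the definition d/dt f(e^{-tX} h)|_{t=0}:
num_R = (f(expm(-eps * X) @ h) - f(expm(eps * X) @ h)) / (2 * eps)
print(num_R, -np.trace(M @ X @ h))           # the two values agree to high accuracy

# Left derivative d/dt f(h e^{tX})|_{t=0}; for this f it equals Tr(MhX),
# consistent with the relation grad^L_X f(h) = -grad^R_{hXh^{-1}} f(h).
num_L = (f(h @ expm(eps * X)) - f(h @ expm(-eps * X))) / (2 * eps)
print(num_L, np.trace(M @ h @ X))
```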
In the space B, one can extend the potential Θ_Ḃ=Θ_Ḃ^∗ in the limit η_m→0 and define Θ_B≡1/2∑_=1^mη_Tr(V_ dnn^-1)+ ∑_=1^mη_dξ^- 1/2∑_=1^mη_Tr(Ṽ_ dññ^-1) as the symplectic potential on B. This potential gives the sympletic form Ω_B as Ω_B=-dΘ_B = 1/2∑_=1^mη_Tr(V_ dnn^-1∧ dnn^-1)-1/2∑_=1^mη_Tr(Ṽ_ dññ^-1∧ dññ^-1) -∑_=1^mdη_∧ (dξ_+1/2Tr(V_ dnn^-1)-1/2Tr(Ṽ_ dññ^-1)). It is clear that in the η_m=0 region of the above (pre-)symplectic structure is degenerate, as expected due to the degeneracy in the parametrization itself in the η_m= 0 region of T_ss^∗ SO(D+1). We are interested in the Poisson algebras between these twisted-geometry variables using the presymplectic form Ω_B. In order to give the explicit Poisson brackets, in the following section we will study the Hopf sections n(𝕍) and ñ(𝕍̃) in the perspectives of their contributions to the Hamiltonian fields on B defined by Ω_B . §.§ Geometric action on the Hopf section and its decomposition §.§.§ Geometric action on the Hopf section The Hopf map is defined as a special projection map π: SO(D+1)↦ℚ_m with ℚ_m:=SO(D+1)/𝕋^m, such that every element in ℚ_m comes from an orbit generated by the maximal subgroup 𝕋^m of SO(D+1) that fixed all of the elements in the set {τ_1,τ_2,...,τ_m}. In the definition representation of SO(D+1) the Hopf map reads π: SO(D+1) → ℚ_m g → 𝕍(g)=(gτ_1g^-1, gτ_2g^-1,...). Note that 𝕍(g) is invariant under g↦ g^α_1,α_2,...,α_m=ge^α_1τ_1+α_2τ_2+...α_mτ_m, thus it is a function of D(D+1)/2-[D+1/2] variables only. This result shows that SO(D+1) can be seen as a bundle (which is referred to as Hopf bundle) over ℚ_m with the 𝕋^m fibers. On this bundle we can introduce the Hopf sections, each as an inverse map to the above projection n: ℚ_m → SO(D+1) 𝕍 ↦ n(𝕍), such that π(n(𝕍))=𝕍. This section assigns a specific SO(D+1) element n to each member of the ℚ_m, and it is easy to see that any given section n is related to all other sections via n^α_1,α_2,...,α_m≡ ne^α_1τ_1+α_2τ_2+...α_mτ_m; hence the free angles {α_1,α_2,...,α_m} parametrize the set of all possible Hopf sections. Notice that each algebra element X∈ so(D+1) can be associated to a vector field X̂ on ℚ_m, which acts on a function f(𝕍) of ℚ_m as ℒ_X̂f(𝕍):=d/dtf(e^-tX𝕍e^tX)|_t=0, where g𝕍g^-1:=(gV_1g^-1, gV_2g^-1,...,gV_mg^-1) with g∈ SO(D+1). Similarly, for a so(D+1) valued function S=S(𝕍) on ℚ_m, it can be also associated to a vector field Ŝ on ℚ_m, , which acts on the function f(𝕍) of ℚ_m as ℒ_Ŝf(𝕍):=d/dtf(e^-tS𝕍e^tS)|_t=0. Specifically, for the linear functions we have ℒ_X̂𝕍:=(ℒ_X̂V_1,..., ℒ_X̂V_m)=(-[X,V_1],...,-[X,V_m])=:-[X,𝕍]. Especially, we are interested in the action of the vector fields on the Hopf section n. Notice that we have ℒ_X̂V_(n)=(ℒ_X̂n)τ_ n^-1 +nτ_(ℒ_X̂n^-1)=[(ℒ_X̂n)n^-1, V_], ∀∈{1,...,m}. Comparing this result with (<ref>), we deduce that (ℒ_X̂n)n^-1=-X+∑_V_ F^_X(𝕍), where F^_X(𝕍) are functions on ℚ_m, so that V_ F^_X(𝕍) commuting with the element 𝕍 for all . Lemma. The solution functions L_^IJ≡ L^: ℚ_m↦ so(D+1) of the equations Tr(L^ dnn^-1)=0, L_^IJV_',IJ=δ_,', appears in the Lie derivative of the Hopf map section n(𝕍) as, L^_X:=L^IJ_ X_IJ=F^_X and it satisfies the key coherence identity ℒ_X̂L^_Y-ℒ_ŶL^_X=L^_[X,Y]. Finally, the general solution to this identity satisfying the conditions L_^IJV_',IJ=δ_,' is given by L'^_X=L^_X+ℒ_X̂α^ where α^ is a function on ℚ_m. Proof. 
To prove Eq.(<ref>), let us take the interior product of an arbitrary vector field X̂ with the definition Tr(L^ dnn^-1)=0 and consider (ℒ_X̂n)n^-1=i_X̂(dnn^-1) given by the definition of Lie derivative, we have 0=i_X̂Tr(L^ dnn^-1)=Tr(L^(ℒ_X̂n)n^-1) =-Tr(L^ X)+∑_'=1^mF^'_XTr(L^ V_')=-L^_X+F^_X, where we used Tr(L^ V_')=L_^IJV_',IJ=δ_,' and (<ref>). Thus, we proved F^_X=L^_X. To prove Eq.(<ref>), we first consider that ℒ_X̂(dnn^-1) = i_X̂(dnn^-1∧ dnn^-1)+d[(ℒ_X̂n)n^-1] = [-X+∑_V_ L^_X,dnn^-1]+d(-X+∑_V_ L^_X) = ∑_V_ dL^_X-[X,dnn^-1], where we used the definition of Lie derivative in the first equality, Eq.(<ref>) in the second and dV_=[dnn^-1,V_] in the third. Then, the above equation leads to 0=ℒ_X̂Tr(L^ dnn^-1) =Tr((ℒ_X̂L^-[L^,X])dnn^-1) +dL^_X by using the equalities Tr(L^ V_')=δ_,'. Further, let us take the interior product of Eq.(<ref>) with Ŷ and we get ℒ_ŶL^_X = Tr((ℒ_X̂L^-[L^,X] )(Y-∑_'V_' L^'_Y)) = ℒ_X̂L^_Y-L^_[X,Y]-∑_'L^'_Y(Tr((ℒ_X̂L^)V_') -Tr(L^[X,V_'])) = ℒ_X̂L^_Y-L^_[X,Y]-∑_'L^'_Yℒ_X̂(Tr(L^ V_') ), where the last term vanishes, thus we obtain the coherence identity (<ref>). To show Eq.(<ref>), let us suppose that we have another solution L'^ to the coherence identity and also the condition Tr(L'^ V_')=L'^IJ_ V_',IJ=δ_,'. Considering the 1-form ϕ^≡ -Tr(L'^ dnn^-1), one can see that its contraction with X̂ ϕ^_X≡ i_X̂ϕ^=-Tr(L'^ (ℒ_X̂n)n^-1)=L'^ _X-L^_X is the difference between the two solutions L'^ _X and L^_X. Thus, ϕ^_X is also a solution to the coherence identity (<ref>). This result together with the definition of the differential i_X̂i_Ŷdϕ^=ℒ_Ŷϕ^_X -ℒ_X̂ϕ^_Y+ϕ^_[X,Y] implies that dϕ^=0, which means that there exists a function α^ locally at least, such that ϕ^=dα^ and thus L'^_X=L^_X+ℒ_X̂α^. This proves the Eq. (<ref>). □ Finally, let us recall that the freedom in choosing the Hopf section lies in the function parameters α^(𝕍) in the expression n'(𝕍)≡ n(𝕍)e^∑_α^(𝕍)τ_ for all possible choices of the sections. By applying Eq.(<ref>) to this n', we immediately get L'^_X= L^_X+ i_X̂dα^. Referring to (<ref>), we can conclude that the function L^ is exactly the function coefficient for the component of (dn)n^-1 in the V_ direction, which is determined by a choice of the Hopf section n. §.§.§ Decomposition and sequence of the Hopf section As we will see in following part of this article, the Hopf section n and the geometric action on it are closely related to the symplectic structure and the symplectic reduction on B. To analyze the Hopf section ℚ_m more explicitly, let us consider the decomposition of the Hopf section n. Recall the definition ℚ_m:=SO(D+1)/𝕋^m, one can decompose ℚ_m as ℚ_m=𝔻_1×𝔻_2×...×𝔻_m with 𝔻_1:=SO(D+1)/(SO(2)_τ_1× SO(D-1)_[τ_1]), 𝔻_2:=SO(D-1)_[τ_1]/(SO(2)_τ_2× SO(D-3)_[τ_2]), ... 𝔻_m:=SO(D+3-2m)_[τ_(m-1)]/SO(2)_τ_m, where SO(2)_τ_ is the group generated by τ_ and SO(D+1-2)_[τ_] is the maximal subgroup of SO(D+1) which preserves (τ_1,...,τ_) and has the Cartan subalgebra spanned by (τ_(+1),...,τ_m). Here one should notice that both of SO(2)_τ_ and SO(D+1-2)_[τ_] preserve (τ_1,...,τ_). Then, the Hopf section n can be decomposed as n=n_1n_2...n_m. This decomposition gives a sequence of the Hopf sections, which reads n_1, n_1n_2, n_1n_2n_3, ..., n_1...n_m. For a specific one n_1...n_ with ∈{1,...,m}, it gives n_1...n_: 𝔻_1×...×𝔻_→ SO(D+1) (V_1,...,V_)↦ n_1(V_1)n_2(V_1,V_2)...n_(V_1,...,V_), where V_1=n_1n_2...n_τ_1 n_^-1...n_2^-1n_1^-1=n_1τ_1 n_1^-1, V_2=n_1n_2...n_τ_2 n_^-1...n_2^-1n_1^-1=n_1n_2τ_2n_2^-1n_1^-1, ..., V_=n_1n_2...n_τ_ n_^-1...n_2^-1n_1^-1. 
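The Hopf-bundle structure underlying the Lemma, namely that 𝕍(g)=(gτ_1g^-1,…,gτ_mg^-1) is insensitive to the torus part of g, can be verified directly in the defining representation. The minimal sketch below is for D+1=4, assumes numpy/scipy, and implements the τ_IJ generators explicitly; the angle values are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

def tau(I, J, dim=4):
    """Generator of rotations in the (I, J) plane."""
    T = np.zeros((dim, dim))
    T[I, J], T[J, I] = 1.0, -1.0
    return T

tau1, tau2 = tau(0, 1), tau(2, 3)            # commuting Cartan pair, m = 2

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
g = expm(A - A.T)                            # a generic SO(4) element

alpha1, alpha2 = 0.9, -2.3                   # a point on the torus fiber T^2
g_shifted = g @ expm(alpha1 * tau1 + alpha2 * tau2)

V1, V2 = g @ tau1 @ g.T, g @ tau2 @ g.T      # the projection g -> V(g)
W1, W2 = g_shifted @ tau1 @ g_shifted.T, g_shifted @ tau2 @ g_shifted.T

# Same point of Q_m = SO(4)/T^2, even though g and g_shifted differ:
print(np.allclose(V1, W1) and np.allclose(V2, W2))   # True
print(np.allclose(g, g_shifted))                     # False
```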
Here one should notice that the decomposition n=n_1...n_m is not unique. For instance, one can carry out the transformation n_→ n_ g, n_+1→ g^-1n_+1 with g∈ SO(D+1) being arbitrary element which preserve (τ_1,...,τ_), and it is easy to verify that the transformation (<ref>) preserves the Hopf section n but changes n_ and n_+1 in the decomposition n=n_1...n_m. We can also establish the geometric actions on the Hopf sections n_1. Specifically, one can give (ℒ_X̂n_1)n_1^-1=-X+V_1L̅^1_X (V_1)+∑_μV̅^μ_1 L̅^μ_X(V_1) based on Eqs.(<ref>), (<ref>) and V_1=n_1τ_1n_1^-1, where V̅^μ_1=n_1τ̅^μ n_1^-1 with {τ̅^μ} being a basis of so(D-1)_τ_1, L̅^1_X (V_1)=L̅^1_IJ(V_1)X^IJ and L̅^μ_X(V_1)=L̅^μ_IJ(V_1) X^IJ are functions of V_1∈𝔻_1 <cit.>. It has been shown that L̅^1_IJ(V_1) is the solution of the equations <cit.> Tr(L̅^1 dn_1 n_1^-1)=0, Tr(L̅^1V_1)=1, and Tr(L̅^1 V̅^μ_1)=0, ∀μ. By comparing Eq.(<ref>) and Eq.(<ref>), it is easy to see that L^1=L̅^1 is a solution of L^1 in Eq.(<ref>). This result will be a key ingredient in discussions in the next section. Now, by applying the results of this section to the presymplectic form Ω_B, we will identify the Hamiltonian fields in B and compute the Poisson brackets. §.§ Computation of Hamiltonian vector fields in pre-symplectic manifold B Let us recall the pre-symplectic potential Θ_B≡1/2∑_=1^mη_Tr(V_ dnn^-1)+ ∑_=1^mη_dξ^- 1/2∑_=1^mη_Tr(Ṽ_ dññ^-1) induced from the SO(D+1) holonomy-flux phase space, which defines the pre-sympletic form Ω_B as Ω_B=-dΘ_B = 1/2∑_=1^mη_Tr(V_ dnn^-1∧ dnn^-1)-1/2∑_=1^mη_Tr(Ṽ_ dññ^-1∧ dññ^-1) -∑_=1^mdη_∧(dξ_+1/2Tr(V_ dnn^-1)-1/2Tr(Ṽ_ dññ^-1)). The associated Poisson brackets can be calculated by considering the Hamiltonian vector fields on B. Let us denote the Hamiltonian vector field for the function f as ψ_f , where f∈{η_, ξ_, p_X≡1/2∑_η_ V^_X=1/2∑_η_ V^_IJX^IJ, p̃_X≡1/2∑_η_Ṽ^_X=1/2∑_η_Ṽ^_IJX^IJ}. Then, using i_ψ_fΩ_B=-df, the vector fields could be checked to be given by ψ_p_X = X̂-∑_L^_X(𝕍)∂_ξ_, ψ_p̃_X = - X̂̃̂-∑_L^_X(𝕍̃)∂_ξ_, ψ_η_= -∂_ξ_. Here X̂ are the vector fields generating the adjoint action on ℚ_m labelled by 𝕍, associated to the algebra elements X. Similarly, X̂̃̂ are the vector fields generating the adjoint action on ℚ_m labelled by 𝕍̃, associated to the algebra elements X. Proof. The first equation of (<ref>) can be checked by considering i_X̂Ω_B=-1/2∑_Tr(d(η_ V_)X)+∑_dη_ L^_X(𝕍). Notice that we have i_∂_ξ_Ω_B=dη_, the first equation of (<ref>) follows immediately. The computation for ψ_p̃_X can be carried out similarly, with an opposite sign due to the reversal of the orientation. □ §.§ Reduction of the pre-symplectic manifold B Recall that in the η_m=0 region Ω_B is degenerate, as expected due to the degeneracy of the parametrization (<ref>) in the η_m= 0 region. Let us now address this degeneracy to get a true symplectic manifold. We can reduce the pre-symplectic manifold B with respect to the vector fields Ê in the kernel of Ω_B, i.e. to consider the quotient manifold B̅≡ B/Ker(Ω_B). The result would be a symplectic manifold with non-degenerate 2-form given by the quotient projection of Ω_B. In obtaining the space B̅, we can introduce the equivalence classes under the equivalence relation p∼ p' whenever p'=e^Êp, with Ê∈Ker(Ω_B) and p, p'∈ B. The operation is thus determined by the vector fields in the kernel of Ω_B. Since it is obvious that the vector fields Ê∈Ker(Ω_B) appear in the region with η_m=0, we look for the vector fields preserving the region while having the interior products with Ω_B proportional to η_. 
Let us first consider the vector fields Ê_X≡ψ_p_X-ψ_p̃_Y, where X∈ so(D+1), Y=-h^-1Xh with h being a group element rotating V^ to Ṽ^=-h^-1V^ h. Indeed, using the fact that V^_X=Ṽ^_Y, the interior product of the field D̂_X with the symplectic 2-form is i_Ê_XΩ_B=-1/2∑_d(η_ V^_X-η_Ṽ^_Y)-1/2∑_η_Tr(Ṽ^ dY) =-1/2∑_η_Tr([V^,X]dnn^-1). Now, let us analyze the degeneracy of i_Ê_XΩ_B. Denoted by K^ the subspace of B defined by η_=η_+1=...=η_m=0. Consider the so(D+1) valued functions F(V_1,...,V_(-1)) on K^ which satisfies n_(-1)^-1...n_2^-1n_1^-1F(V_1,...,V_(-1))n_1n_2...n_(-1)∈ so(D+3-2)_[τ_(-1)], where n_1n_2...n_(-1) determined by (V_1,...,V_(-1)) is from the sequence of the Hopf sections (<ref>), SO(D+1-2)_[τ_] is the maximal subgroup of SO(D+1) which preserves (τ_1,...,τ_) and has the Cartan subalgebra spanned by (τ_(+1),...,τ_m). Then, we can define the vector fields Ê^_F by Ê^_F:=Ê_X|_X=F(V_1,...,V_(-1)), and one can verify i_Ê^_FΩ_B=0 on K^ by using Eq.(<ref>). Thus, notice the relation K^1⊂ K^2⊂...⊂ K^m, we have Ker(Ω_B)≡{Ê^_F| ∈{1,...,m}} on K^m. Next, to find the equivalence class generated by the vector fields Ê^_F on K^, we note that the actions of the fields should rotate jointly the vectors (V_,..,V_m) and (Ṽ_,...,Ṽ_m), that is we have Ê^_F (V_')=-[F(V_1,...,V_(-1)),V_'], Ê^_F(Ṽ_')=-h^-1[F(V_1,...,V_(-1)),V_']h. Further, the actions preserves the group element h, since Ê_X(h)=-Xh-hY=0 which ensures that Ê^_F(h)=0. Therefore, given p and p' on K^, we have p'∼ p if and only if the two are related by a joint rotation in (V_,..,V_m) and (Ṽ_,...,Ṽ_m) and a h-preserving translations in (ξ_1,...,ξ_m). It is easy to see that the parametrization (<ref>) maps p and p'∼ p to the same image in T^∗_ssSO(D+1), as expected that the equivalence class generated by the vector fields Ê^_F on K^ also describes the degeneracy of the parametrization (<ref>). After the quotient with respect to Ê^_F on each K_, we are left with a manifold K̅_ parametrized by only (η_1,...,η_(-1)), (V_1,...,V_m), (Ṽ_1,...,Ṽ_(-1)) and (ξ_1,...,ξ_m). Recall that B≡ B|_η_m>0∪ K^m and K^1⊂ K^2⊂...⊂ K^m, let us define K̇^m:=K^m/Ker(Ω_B) and then the quotient space B̅≡ B|_η_m>0∪K̇^m. Finally, we conclude that the parametrization (<ref>) gives a one to one map between B̅ and its image T^∗_ssSO(D+1), and it can be extended as a symplectic-morphism with B̅ being equipped with the symplectic structure Ω_B. §.§ Poisson algebra among the twisted geometry parameters Based on the Hamiltonian vector fields given by the pre-symplectic potential Θ_B, the Poisson brackets between the twisted geometry parameters can be given by {ξ_,η_}=δ_,, {p_X, p_Y}=p_[X,Y], {p̃_X, p̃_Y}=p̃_[X,Y] {V^,η_}= {Ṽ^,η_}=0, and {V^,Ṽ^}=0. Moreover, one can show that the Poisson brackets given by Θ_B between ξ_ and p_X, or the ones between ξ_ and p̃_X are non-trivial, and they are given by the function L^: ℚ_m→ so(D+1) in the form {ξ_,p_X}= L^_X(𝕍), {ξ_,p̃_X}= L^_X(𝕍̃), where L^_X≡Tr(L^ X) is the component of L^ along the algebra element X. Especially, the Eqs. (<ref>) taken as the definition equations of the functions L^, together with the Poisson brackets (<ref>), already determined L^ to be exactly the results of the brackets {ξ_,p_X} and {ξ_,p̃_X} given by the potential Θ_B corresponding to our choice of the Hopf sections. This result can be shown by the fact that, the function L^ defined by Eqs.(<ref>) is constrained by two conditions given by the above Poisson brackets (<ref>), and these two conditions are exactly the definition of L^ in Lemma in section <ref>. 
Let us then illustrate the details of this fact as follows. The first one of the two conditions comes from the equation p_IJL_^IJ=p_IJ{ξ_,p^IJ}=1/2{ξ_,p^IJp_IJ}= 1/4{ξ_,∑_η^2_} =1/2η_, with p_IJ:=1/2∑_(η_ V^_IJ), which gives the normalization condition L_^IJV^_IJ=δ_^ in Lemma in section <ref>. The second one of the two conditions just comes from the Jacobi identity {ξ_,{p_X,p_Y}}+{p_X,{p_Y,ξ_}}+{p_Y,{ξ_,p_X}}=0, from which we get L^_[X,Y]-{p_X,L_Y^}+{p_Y,L_X^}=0, By using {p_X,L_Y^}=i_ψ_p_XdL_Y^=ℒ_X̂L_Y^, one can write the identity (<ref>) as an identity involving Lie derivatives and we get ℒ_X̂L^_Y-ℒ_ŶL^_X=L^_[X,Y], which is just the coherence identity in Lemma in section <ref>. Now, it is easy to see these two conditions makes the Lemma in section <ref> applicable and we can verify the result given in the beginning of this paragraph. § RELATION WITH THE TWISTED GEOMETRY PARAMETRIZATIONS ON EDGE SIMPLICITY CONSTRAINT SURFACE The twisted geometry parametrization introduce in this article is constructed in the space ×_e∈γT^∗_ssSO(D+1)_e, and we also have introduced the twisted geometry parametrization of the edge simplicity constraint surface ×_e∈γT^∗_esSO(D+1)_e in our companion paper <cit.>. Thus, it is worth to discuss the relation between these two types of parametrizations. We also focus on the twisted geometry parametrizations of the space T^∗_ssSO(D+1) on a single edge without loss of generality. Then, by setting η_2=...=η_m=0 in Eq.(<ref>), we get X=1/2η_1nτ_1n^-1 which parametrizes all of the simple fluxes satisfying X^[IJX^KL]=0 in so(D+1). Besides, recall the decomposition n=n_1...n_m of the Hopf section n, we get X = 1/2η_1n_1τ_1n_1^-1 h = n_1e^ξ^1τ_1n̅ñ_1^-1 with n̅=n_2...n_me^ξ^2τ_2...e^ξ^mτ_m(ñ_2...ñ_m)^-1. Recall the edge simplicity constraint surface T_es^∗ SO(D+1) defined by T_es^∗ SO(D+1)={(h,X)∈ T^∗ SO(D+1)|X^[IJX^KL]=0}, it is easy to see that T_es^∗ SO(D+1)⊂ T_ss^∗ SO(D+1) is parametrized by (η_1,ξ_1, V_1, Ṽ_1, n̅) based on Eq.(<ref>), where V_1=n_1τ_1n_1^-1, Ṽ_1=ñ_1τ_1ñ_1^-1 with the Hopf sections n_1 and ñ_1 being given by the decompositions n=n_1...n_m and ñ=ñ_1...ñ_m respectively. Thus, by restricting the consideration on the edge simplicity constraint surface, the parametrization (<ref>) reproduces the twisted geometry parametrization introduced in our companion paper <cit.>. We can further consider the symplectic reduction with respect to the edge simplicity constraint, which can be expressed as 𝒮_IJKL≡ p_[IJp_KL]=0 with p_IJ:=1/2∑_η_ V^_IJ in twisted geometry parameters. Notice that the Hamiltonian vector field of edge simplicity constraint is spanned by ψ^𝒮_IJKL=2p_[IJ(X̂_KL]-∑_L^_KL]∂ _ξ_), where X̂_KL is the vector field generating the adjoint action of X_KL on ℚ_m labelled by 𝕍, with X_KL is the so(D+1) algebra element given by X_KL≡ X^IJ_KL=δ^I_[Kδ^J_L]. It is easy to verify that the vector field (<ref>) only induces the transformation of holonomy on the edge simplicity constraint surface, which reads ℒ_α^IJKLψ^𝒮_IJKLh= 1/2η_1 α^IJKLV^1_[IJτ_KL]h= 1/2η_1 α̅^KLn_1(τ̅_KLn̅)e^ξ^1τ_1n_1^-1, where α^IJKL is an arbitrary tensor satisfying α^IJKL=α^[IJKL] and α̅^KLτ̅_KL≡α^IJKLV^1_[IJ(n^-1_1τ_KL]n_1)∈ so(D-1)_τ_1. Thus, the component n̅ is just the gauge component with respect to edge simplicity constraint. 
By reducing the edge simplicity constraint surface with respect to the gauge orbit generated by ψ^𝒮_IJKL, we get the simplicity reduced phase space B_es given by B_es≡ℝ_+× S^1×𝔻_1×𝔻̃_1 ≡{(η_1,ξ_1,V_1, Ṽ_1)}, where η_1∈ [0,+∞), ξ_1∈[-π,π), V_1∈𝔻_1, Ṽ_1∈𝔻̃_1, with 𝔻_1 and 𝔻̃_1 defined by Eq.(<ref>). Correspondingly, the reduced symplectic structure on B_es gives the Poisson brackets {p̅_X, p̅_Y}= p̅_[X,Y], {p̃̅̃_X, p̃̅̃_Y}=p̃̅̃_[X,Y], {ξ_1,η_1}=1, where p̅_X≡1/2η_1V^1_X=1/2η_1 V^1_IJX^IJ and p̃̅̃_X≡1/2η_1Ṽ^1_X=1/2η_1Ṽ^1_IJX^IJ. Specifically, the Poisson brackets between ξ_1 and (p̅_X, p̃̅̃_X) are given by {ξ_1, p̅_X}=L^1_X(𝕍), {ξ_1, p̃̅̃_X}=L^1_X(𝕍̃). Notice that these Poisson brackets are not independent of (V_2,...,V_m) and (Ṽ_2,...,Ṽ_m), since ξ_1 contains the information of the choices of the Hopf sections n and ñ, which depend on 𝕍 and 𝕍̃. Recalling the result of section <ref>, by using the decompositions n=n_1...n_m and ñ=ñ_1...ñ_m, one can choose the Hopf sections n and ñ to ensure that L^1(𝕍)=L̅^1(V_1), and L^1(𝕍̃)=L̅^1(Ṽ_1). Then, the symplectic structure on the reduced phase space B_es is given by Eqs.(<ref>), (<ref>) and (<ref>), which is identical to that given in our companion paper <cit.>. Further, the gauge reduction with respect to the Gaussian constraint and the treatment of the vertex simplicity constraint can be carried out following the same procedures as in <cit.>. § CONCLUSION AND OUTLOOK The realization of gauge fixing in quantum gauge reduction and the Fermion coupling in all dimensional LQG require us to construct the coherent state in the full Hilbert space, which involves the non-simple representations of SO(D+1). Following previous experience, it is reasonable to consider the generalized twisted geometry coherent state, and thus it is necessary to establish the twisted geometry parametrization of the full SO(D+1) holonomy-flux phase space. We established the generalized twisted geometry parametrization for a dense subspace of the full SO(D+1) holonomy-flux phase space. In particular, the twisted geometry parameters are adapted to the splitting of the Ashtekar connection to capture the degrees of freedom of the intrinsic and extrinsic parts of the spatial geometry respectively. Moreover, the symplectic structure on the SO(D+1) holonomy-flux phase space is re-expressed in terms of the twisted geometry parameters. Through studying the properties of the Hopf sections in the SO(D+1) Hopf fibre bundle, we obtained the Poisson algebra among the twisted geometry parameters. In particular, the relation between the twisted geometry parametrizations for the edge simplicity constraint surface and for the dense subspace ×_e∈γT^∗_ss SO(D+1)_e is discussed. We pointed out that the twisted geometry parametrization for ×_e∈γT^∗_ss SO(D+1)_e is equivalent to that for the edge simplicity constraint surface after carrying out the gauge reduction with respect to the edge simplicity constraint, which ensures that the treatment of the anomalous vertex simplicity constraint proposed in our companion paper <cit.> is still valid for the more general case considered in this article. The twisted geometry parametrization for the dense subspace ×_e∈γT^∗_ss SO(D+1)_e provides us with the tools necessary to construct the twisted geometry coherent state in the full Hilbert space of all dimensional LQG. 
More explicitly, similar to the construction of the twisted geometry coherent state in the solution space of the edge simplicity constraint, one could decompose the heat-kernel coherent state of SO(D+1) based on the twisted geometry parametrization for ×_e∈γT^∗_ss SO(D+1)_e, and then select the terms dominated by the highest and lowest weights in each representation of SO(D+1), to form the twisted geometry coherent state in the full Hilbert space of all dimensional LQG. This will be the subject of a follow-up work <cit.>. It should be remarked that the twisted geometry parametrization of the SO(D+1) holonomy-flux phase space is also valid for general SO(D+1) Yang-Mills gauge theory. Though the “geometry” may be meaningless outside the framework of gravity, the twisted geometry parameters provide a new perspective from which to analyze the Poisson structure of the SO(D+1) holonomy-flux phase space, which could help us to understand the quantum aspects of the corresponding SO(D+1) Yang-Mills gauge theory. § ACKNOWLEDGMENTS This work is supported by the National Natural Science Foundation of China (NSFC) with Grants No. 12047519, No. 11775082, No. 11875006 and No. 11961131013.
http://arxiv.org/abs/2307.04831v1
20230710181058
Single-Inclusive Particle Production from $pA$ Collision at Next-to-Leading Order
[ "Heikki Mäntysaari", "Yossathorn Tawabutr" ]
hep-ph
[ "hep-ph", "nucl-ex", "nucl-th" ]
Single-Inclusive Particle Production from pA Collision at Next-to-Leading Order Heikki Mäntysaari Yossathorn Tawabutr Department of Physics, University of Jyväskylä, P.O. Box 35, 40014 University of Jyväskylä, Finland Helsinki Institute of Physics, P.O. Box 64, 00014 University of Helsinki, Finland We present the first fully consistent NLO calculation of the single-inclusive forward hadron production in proton-nucleus (pA) collisions under the color glass condensate (CGC) framework. In the dilute-dense limit, the NLO cross-section can be written as a convolution of the NLO impact factor, NLO parton distribution function (PDF), NLO fragmentation function (FF) and dipole-target scattering amplitude which satisfies the NLO small-x Balitsky-Kovchegov (BK) evolution. We demonstrate that, without the NLO corrections to the impact factor, we obtain a significant Cronin peak when the dipole amplitude satisfies the NLO BK equation. This would contradict the recent LHCb results <cit.>. However, the Cronin peak becomes suppressed when the NLO correction to the impact factor is included. This is the main result of this work. The dependence on resummation schemes for the NLO BK evolution will also be discussed. DIS2023: XXX International Workshop on Deep-Inelastic Scattering and Related Subjects, Michigan State University, USA, 27-31 March 2023 § INTRODUCTION This article is based on the work presented in <cit.>, which is in preparation. Single-inclusive hadron productions in forward proton-proton (pp) or proton-nucleus (pA) collisions at high energy can be expressed using the CGC formalism <cit.> in terms of the unintegrated gluon distribution <cit.>. This opens up an opportunity to compare CGC calculations against experimental measurements in order to probe the small-x structure of protons and nuclei <cit.>. As a result, forward hadron productions in pp and pA collisions have been an active area of study for more than two decades <cit.>. Consider a collision in the center-of-mass frame between a proton and a nucleon from the nucleus, which could be another proton, such that a forward parton from the proton – with a large longitudinal momentum fraction, x_p – interacts with a parton from the nucleus with small longitudinal momentum fraction, X_g. As the collision takes place, the forward parton receives a transverse momentum, k_⊥, but remains forward. Eventually, it fragments into a hadron. A direct calculation of the kinematics allows us to write x_p = k_⊥/√(s) e^y and X_g = k_⊥/√(s) e^-y, where y is the rapidity and s is the squared center-of-mass energy per nucleon of the pA collision. In this “dilute-dense” framework with k_⊥ greater than the saturation momentum, Q_s, the “hybrid formalism” applies <cit.>, allowing us to write the hadron production cross section as a convolution of PDFs – q_f(x_p) for quarks and g(x_p) for gluons – FFs – D_h/f(z) for quarks and D_h/g(z) for gluons – and the unintegrated gluon distribution, the Fourier transform of the dipole amplitude <cit.>. At the leading order (LO), we have <cit.> dσ_pA→ hX/d^2p_⊥ dy = ∫dz/z^2∫d^2x_0 d^2x_1/(2π)^2 e^-ik·(x_0-x_1)[∑_fx_pq_f(x_p) D_h/f(z) 1/N_c⟨tr[V_0V_1^†]⟩(X_g) +
x_pg(x_p) D_h/g(z) 1/N_c^2-1⟨Tr[U_0U_1^†]⟩(X_g) ] , where N_c is the number of quark colors and ⟨⋯⟩(X_g) is the “CGC averaging” <cit.> over the target nucleus's quantum state evaluated at X_g. Finally, with the notation that x = (x^1,x^2) is a transverse vector, V_n≡ V_x_n = 𝒫 exp[ig∫_-∞^∞dx^-t^aA^+a(x^+=0,x^-,x_n) ] , U_n≡ U_x_n = 𝒫 exp[ig∫_-∞^∞dx^-T^aA^+a(x^+=0,x^-,x_n) ] , are the fundamental and adjoint light-cone Wilson lines, respectively, with 𝒫 being the path-ordering operator. Throughout this article, we employ the light-cone coordinates such that x^±=(x^0± x^3)/√(2). The first term in the square brackets of Eq. (<ref>) corresponds to the “quark channel,” while the second term corresponds to the “gluon channel” <cit.>. Eq. (<ref>) receives next-to-leading-order (NLO) corrections from an emission of a “primary parton” either before or after the interaction with the target. The resulting cross section follows from a direct calculation in the light-cone perturbation theory (LCPT) <cit.>. For single-inclusive cross-sections, we integrate over the transverse position of one of the two outgoing partons, while keeping track of the other parton as it fragments into a hadron <cit.>. This leads to 4 different NLO channels – qq, qg, gq and gg – denoting the incoming parton and the outgoing parton we track, respectively. The resulting NLO expression will be omitted in this article for brevity.[See <cit.> for the full expression and its derivation.] The NLO correction introduced above only concerns the “impact factor.” Additionally, each of the PDF, FF and dipole amplitude that enter Eq. (<ref>) receive NLO corrections. For the dipole amplitude, the corrections come through the high-energy BK evolution <cit.>. In this work, we perform for the first time the full-NLO calculation of single-inclusive hadron productions, including all NLO corrections outlined above <cit.>. The ingredients of our calculation are detailed below, followed by the preliminary results and the comparison with the recent LHCb's forward pPb π^0 production data <cit.>. § INGREDIENTS As mentioned previously, the dipole amplitude for the pp collision is taken from <cit.>, in which the NLO BK evolution <cit.> is applied to the initial condition given by the MV^γ model, S^(0)(x_0,x_1) ≡1/N_c⟨tr[V_0V_1^†]⟩(x_0) = exp[ - 1/4(x_10^2Q^2_s,0)^γln(1/x_10Λ + e) ] at initial value, x_0=0.01. Here, x_10 = |x_1-x_0| and Λ = 0.241 GeV is the QCD scale, while γ and Q_s,0 are the model parameters determined by fitting the evolved dipole amplitude to the HERA structure function data <cit.>. Note that the addition by e inside the logarithm is so that the infrared divergence is regulated. Since there are several schemes to resum the double-logarithmic terms in the NLO BK evolution, we follow the approach of <cit.> and perform the cross-section calculation separately for each resummation scheme, using the fitted parameters from <cit.> to obtain the dipole amplitude. A comparison of the resulting cross-section is given in Section <ref>. For the pA case, we employ the optical Glauber model introduced in <cit.>, which gives the following initial condition for the dipole amplitude in the pA case, S^(0)_pA(x_0,x_1;b_⊥) = exp[ - 1/4 σ_0/2AT_A(b_⊥)(x_10^2Q^2_s,0)^γln(1/x_10Λ + e) ] , where σ_0/2 is the transverse area of a proton and A is the mass number of the nucleus. Here, T_A(b_⊥) is the transverse thickness function of the nucleus, which can be obtained from the Woods-Saxon distribution of nuclear density. Eq. 
(<ref>) depends on the impact parameter, b_⊥, of the pA collision, in addition to the model parameters, γ and Q_s,0. From there, the NLO BK evolution is applied separately for each b_⊥ to obtain the evolved dipole amplitude that are eventually used to calculate the particle production yield in the pA collision as a function of b_⊥. Then, we integrate over b_⊥ weighted by the average number of binary collisions to obtain the overall pA cross-section <cit.>. The dipole amplitude in each case is convoluted with the PDF and FF. For the PDF, we employ the Martin-Stirling-Thorne-Watt (MSTW) PDF at NLO <cit.> through the LHAPDF library <cit.>. As for the FF, we use the de Florian-Sassot-Stratmann (DSS) results at NLO <cit.>. Finally, the NLO impact factor has collinear and rapidity divergences. The former is subtracted by the DGLAP evolution of the PDF and FF <cit.>. In <cit.>, the rapidity divergences are subtracted by the LO BK evolution of the dipole amplitude. However, in <cit.>, it is shown that one could leave the rapidity divergence in the NLO impact factor while evaluating the LO impact factor terms at the initial condition, X_g = x_0. This is called the “unsubtracted scheme,” and it is theoretically more exact because it does not require subtracting and adding a potentially large contribution, which can cause problems when the running coupling is at play <cit.>. In this work, we employ the momentum-space running coupling prescription for the impact factor, making the unsubtracted scheme a better choice. With all the ingredients specified above, we calculate the single-inclusive π^0 production cross-section in pPb collisions at the full NLO level, whose results are presented in the next section. Note that this is a novel development. For the first time, the dipole amplitude fitted to the data using NLO BK evolution <cit.> is employed in such calculations.[In <cit.>, the NLO corrections to the BK evolution of the dipole amplitude are not included.] § RESULTS §.§ Hadron Production Spectrum We perform the calculation at LHCb's kinematics, with center-of-mass energy, √(s) = 8.16 TeV, and rapidity, y=3, using two different resummation schemes in the NLO BK evolution of the dipole: (i) kinematically-constrained BK (KCBK) <cit.> and (ii) local-rapidity resummed BK (ResumBK) <cit.>. Respectively, the resulting π^0 cross-sections for the two resummation schemes are shown in Figure <ref>. There, the error bands are constructed by varying the factorization scale such that μ = 2p_⊥,4p_⊥,8p_⊥.[In <cit.>, the cross-section appears to be stable only for μ≳ 2p_⊥.] From Figure <ref>, we see that our spectra differ very slightly across the resummation schemes. On a more unfortunate note, they significantly overestimate the LHCb results. However, the functional form seems to be similar, with the discrepancy coming mainly from an overall factor. We suspect that the mismatch may result from a problem when the model with parameters fitted from HERA data is generalized to pA collisions using the optical Glauber model <cit.>. The issue will be studied in a future work. For the remainder of the article, we will only consider the b_⊥=0 case where the potential issues with pA dipoles are not as severe. 
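For orientation, the kinematics x_p = (k_⊥/√s)e^y and X_g = (k_⊥/√s)e^{-y} and the MV^γ-type initial condition can be evaluated directly at LHCb-like kinematics (√s = 8.16 TeV, y = 3). The standalone snippet below is only illustrative, not the fit or evolution code of this work; the helper name S0 is ours, and the values of Q_{s,0}^2 and γ are placeholders rather than the fitted parameters of the dipole fit.

```python
import numpy as np

sqrt_s = 8160.0                 # GeV, per-nucleon center-of-mass energy (LHCb pPb)
y = 3.0                         # forward rapidity considered in the text

for kT in (2.0, 5.0, 10.0):     # GeV
    x_p = kT / sqrt_s * np.exp(y)
    X_g = kT / sqrt_s * np.exp(-y)
    print(f"kT = {kT:5.1f} GeV:  x_p = {x_p:.3e},  X_g = {X_g:.3e}")

# MV^gamma-type dipole amplitude at the initial x_0 = 0.01 (illustrative parameters only)
def S0(r, Qs0_sq=0.1, gamma=1.1, Lambda=0.241):
    """S^(0)(r) = exp[-(r^2 Qs0^2)^gamma ln(1/(r Lambda) + e) / 4], with r in GeV^-1."""
    return np.exp(-0.25 * (r**2 * Qs0_sq) ** gamma * np.log(1.0 / (r * Lambda) + np.e))

for r in (0.1, 1.0, 5.0):
    print(f"r = {r:4.1f} GeV^-1:  S0 = {S0(r):.4f}")
```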
§.§ Nuclear Modification Factor: Cronin Effect Despite the mismatch in our pPb spectra with the LHCb measurement, the most striking results of our calculation are in the nuclear modification factor, which is defined in the case of b_⊥=0 as R_pA = dN_pA→ hX/d^2p_⊥dy/[N_bin|_b_⊥=0] dN_pp→ hX/d^2p_⊥dy , where N_pp/pA→ hX is the particle production yield and N_bin|_b_⊥=0 is the number of binary collisions in pA collisions at b_⊥=0. The factor, R_pA, allows for a direct comparison between the pp and pA cross-sections, in such the way that R_pA=1 would imply that the nucleus behaved in the context of a pA collision as if it were A separate protons. As mentioned above, we only consider the pA collisions at b_⊥=0. The results for both resummation schemes are shown in Figure <ref>. With LO impact factor but PDF, FF and dipole at NLO (the orange bands), we see a clear Cronin effect around p_⊥≈ 4 - 5 GeV, which is larger with the KCBK evolution. However, in the full-NLO case (the blue bands), the Cronin peak disappears, and the discrepancy between KCBK and ResumBK results become much smaller. The former is especially desirable because the R_pPb measurement from LHCb displays no Cronin peak <cit.>. This result is of great importance – if the NLO corrections to the dipole's evolution is to be included, then the NLO corrections must consistently be included everywhere else: the impact factor, PDF and FF. § CONCLUSION AND OUTLOOK For the first time, we employ the CGC framework to compute the hadron production cross section in pA collisions at the full NLO accuracy consistently with the DIS data. The main result of this work is that the NLO corrections to the impact factor are essential to remove the Cronin effect at moderate hadron’s transverse momentum, p_⊥. Furthermore, the discrepancy in R_pA due to the NLO BK resummation scheme becomes suppressed in the full-NLO case where the impact factor is also at NLO. There remains a significant discrepancy between our pA spectra and the LHCb results <cit.>, possibly due to the dipole fit and its generalization to pA collisions. The issue will be investigated further in a future work. In light of upcoming forward scattering measurements <cit.>, the dependence of our results on the rapidity, y, will also be studied. Last but not least, as an additional cross-check, our calculation will be repeated with the target momentum fraction BK (TBK) evolution <cit.>, which is another available resummation scheme of the NLO BK evolution. § ACKNOWLEDGMENTS YT would like to thank Dr. Tuomas Lappi for helpful discussions and the DIS2023 organizers for the opportunity to present the work. The authors are supported by the Academy of Finland, the Centre of Excellence in Quark Matter, and projects 338263 and 346567, under the European Union’s Horizon 2020 research and innovation programme by the European Research Council (ERC, grant agreement No. ERC-2018-ADG-835105 YoctoLHC) and by the STRONG-2020 project (grant agreement No. 824093). The content of this article does not reflect the official opinion of the European Union and responsibility for the information and views expressed therein lies entirely with the authors. 10 LHCb:2022vfn LHCb collaboration, Nuclear modification factor of neutral pions in the forward and backward regions in pPb collisions, [https://arxiv.org/abs/2204.106082204.10608]. NLOsinc H. Mäntysaari and Y. Tawabutr, in preparation, 2023. Mueller:1989st A. H. 
Mueller, Small x Behavior and Parton Saturation: A QCD Model, https://doi.org/10.1016/0550-3213(90)90173-BNucl. Phys. B 335 (1990) 115–137. Mueller:1993rr A. H. Mueller, Soft gluons in the infinite momentum wave function and the BFKL pomeron, https://doi.org/10.1016/0550-3213(94)90116-3Nucl. Phys. B 415 (1994) 373–385. Balitsky:1995ub I. Balitsky, Operator expansion for high-energy scattering, https://doi.org/10.1016/0550-3213(95)00638-9Nucl. Phys. B 463 (1996) 99–160, [https://arxiv.org/abs/hep-ph/9509348hep-ph/9509348]. Gelis:2010nm F. Gelis, E. Iancu, J. Jalilian-Marian and R. Venugopalan, The Color Glass Condensate, https://doi.org/10.1146/annurev.nucl.010909.083629Ann. Rev. Nucl. Part. Sci. 60 (2010) 463–489, [https://arxiv.org/abs/1002.03331002.0333]. Dumitru:2002qt A. Dumitru and J. Jalilian-Marian, Forward quark jets from protons shattering the colored glass, https://doi.org/10.1103/PhysRevLett.89.022301Phys. Rev. Lett. 89 (2002) 022301, [https://arxiv.org/abs/hep-ph/0204028hep-ph/0204028]. Dumitru:2005gt A. Dumitru, A. Hayashigaki and J. Jalilian-Marian, The Color glass condensate and hadron production in the forward region, https://doi.org/10.1016/j.nuclphysa.2005.11.014Nucl. Phys. A 765 (2006) 464–482, [https://arxiv.org/abs/hep-ph/0506308hep-ph/0506308]. Chirilli:2011km G. A. Chirilli, B.-W. Xiao and F. Yuan, One-loop Factorization for Inclusive Hadron Production in pA Collisions in the Saturation Formalism, https://doi.org/10.1103/PhysRevLett.108.122301Phys. Rev. Lett. 108 (2012) 122301, [https://arxiv.org/abs/1112.10611112.1061]. Chirilli:2012jd G. A. Chirilli, B.-W. Xiao and F. Yuan, Inclusive Hadron Productions in pA Collisions, https://doi.org/10.1103/PhysRevD.86.054005Phys. Rev. D 86 (2012) 054005, [https://arxiv.org/abs/1203.61391203.6139]. Stasto:2013cha A. M. Stasto, B.-W. Xiao and D. Zaslavsky, Towards the Test of Saturation Physics Beyond Leading Logarithm, https://doi.org/10.1103/PhysRevLett.112.012302Phys. Rev. Lett. 112 (2014) 012302, [https://arxiv.org/abs/1307.40571307.4057]. Watanabe:2015tja K. Watanabe, B.-W. Xiao, F. Yuan and D. Zaslavsky, Implementing the exact kinematical constraint in the saturation formalism, https://doi.org/10.1103/PhysRevD.92.034026Phys. Rev. D 92 (2015) 034026, [https://arxiv.org/abs/1505.051831505.05183]. Shi:2021hwx Y. Shi, L. Wang, S.-Y. Wei and B.-W. Xiao, Pursuing the Precision Study for Color Glass Condensate in Forward Hadron Productions, https://doi.org/10.1103/PhysRevLett.128.202302Phys. Rev. Lett. 128 (2022) 202302, [https://arxiv.org/abs/2112.069752112.06975]. Altinoluk:2011qy T. Altinoluk and A. Kovner, Particle Production at High Energy and Large Transverse Momentum - 'The Hybrid Formalism' Revisited, https://doi.org/10.1103/PhysRevD.83.105004Phys. Rev. D 83 (2011) 105004, [https://arxiv.org/abs/1102.53271102.5327]. Lappi:2013zma T. Lappi and H. Mäntysaari, Single inclusive particle production at high energy from HERA data to proton-nucleus collisions, https://doi.org/10.1103/PhysRevD.88.114020Phys. Rev. D 88 (2013) 114020, [https://arxiv.org/abs/1309.69631309.6963]. Altinoluk:2014eka T. Altinoluk, N. Armesto, G. Beuf, A. Kovner and M. Lublinsky, Single-inclusive particle production in proton-nucleus collisions at next-to-leading order in the hybrid formalism, https://doi.org/10.1103/PhysRevD.91.094016Phys. Rev. D 91 (2015) 094016, [https://arxiv.org/abs/1411.28691411.2869]. Ducloue:2017dit B. Ducloué, E. Iancu, T. Lappi, A. H. Mueller, G. Soyez, D. N. 
Triantafyllopoulos et al., Use of a running coupling in the NLO calculation of forward hadron production, https://doi.org/10.1103/PhysRevD.97.054020Phys. Rev. D 97 (2018) 054020, [https://arxiv.org/abs/1712.074801712.07480]. Liu:2019iml H.-Y. Liu, Y.-Q. Ma and K.-T. Chao, Improvement for Color Glass Condensate factorization: single hadron production in pA collisions at next-to-leading order, https://doi.org/10.1103/PhysRevD.100.071503Phys. Rev. D 100 (2019) 071503, [https://arxiv.org/abs/1909.023701909.02370]. Kang:2019ysm Z.-B. Kang and X. Liu, Power Counting the Small-x Observables, [https://arxiv.org/abs/1910.101661910.10166]. Liu:2020mpy H.-Y. Liu, Z.-B. Kang and X. Liu, Threshold resummation for hadron production in the small-x region, https://doi.org/10.1103/PhysRevD.102.051502Phys. Rev. D 102 (2020) 051502, [https://arxiv.org/abs/2004.119902004.11990]. Kovchegov:2001sc Y. V. Kovchegov and K. Tuchin, Inclusive gluon production in DIS at high parton density, https://doi.org/10.1103/PhysRevD.65.074026Phys. Rev. D 65 (2002) 074026, [https://arxiv.org/abs/hep-ph/0111362hep-ph/0111362]. Kovchegov:2012mbw Y. V. Kovchegov and E. Levin, Quantum Chromodynamics at High Energy, vol. 33. Cambridge University Press, 2012. Lepage:1980fj G. P. Lepage and S. J. Brodsky, Exclusive Processes in Perturbative Quantum Chromodynamics, https://doi.org/10.1103/PhysRevD.22.2157Phys. Rev. D 22 (1980) 2157. Brodsky:1989pv S. J. Brodsky and G. P. Lepage, Exclusive Processes in Quantum Chromodynamics, https://doi.org/10.1142/9789814503266_0002Adv. Ser. Direct. High Energy Phys. 5 (1989) 93–240. Balitsky:1997mk I. Balitsky, Operator expansion for diffractive high-energy scattering, https://doi.org/10.1063/1.53693AIP Conf. Proc. 407 (1997) 953, [https://arxiv.org/abs/hep-ph/9706411hep-ph/9706411]. Kovchegov:1999yj Y. V. Kovchegov, Small-x F_2 structure function of a nucleus including multiple pomeron exchanges, https://doi.org/10.1103/PhysRevD.60.034008Phys. Rev. D 60 (1999) 034008, [https://arxiv.org/abs/hep-ph/9901281hep-ph/9901281]. Kovchegov:1999ua Y. V. Kovchegov, Unitarization of the BFKL pomeron on a nucleus, https://doi.org/10.1103/PhysRevD.61.074018Phys. Rev. D 61 (2000) 074018, [https://arxiv.org/abs/hep-ph/9905214hep-ph/9905214]. Balitsky:2007feb I. Balitsky and G. A. Chirilli, Next-to-leading order evolution of color dipoles, https://doi.org/10.1103/PhysRevD.77.014019Phys. Rev. D 77 (2008) 014019, [https://arxiv.org/abs/0710.43300710.4330]. Beuf:2020dxl G. Beuf, H. Hänninen, T. Lappi and H. Mäntysaari, Color Glass Condensate at next-to-leading order meets HERA data, https://doi.org/10.1103/PhysRevD.102.074028Phys. Rev. D 102 (2020) 074028, [https://arxiv.org/abs/2007.016452007.01645]. Beuf:2014uia G. Beuf, Improving the kinematics for low-x QCD evolution equations in coordinate space, https://doi.org/10.1103/PhysRevD.89.074039Phys. Rev. D 89 (2014) 074039, [https://arxiv.org/abs/1401.03131401.0313]. Iancu:2015vea E. Iancu, J. D. Madrigal, A. H. Mueller, G. Soyez and D. N. Triantafyllopoulos, Resumming double logarithms in the QCD evolution of color dipoles, https://doi.org/10.1016/j.physletb.2015.03.068Phys. Lett. B 744 (2015) 293–302, [https://arxiv.org/abs/1502.056421502.05642]. Ducloue:2019ezk B. Ducloué, E. Iancu, A. H. Mueller, G. Soyez and D. N. Triantafyllopoulos, Non-linear evolution in QCD at high-energy beyond leading order, https://doi.org/10.1007/JHEP04(2019)081JHEP 04 (2019) 081, [https://arxiv.org/abs/1902.066371902.06637]. H1:2009pze H1, ZEUS collaboration, F. D. 
Aaron et al., Combined Measurement and QCD Analysis of the Inclusive e+- p Scattering Cross Sections at HERA, https://doi.org/10.1007/JHEP01(2010)109JHEP 01 (2010) 109, [https://arxiv.org/abs/0911.08840911.0884]. H1:2012xnw H1, ZEUS collaboration, H. Abramowicz et al., Combination and QCD Analysis of Charm Production Cross Section Measurements in Deep-Inelastic ep Scattering at HERA, https://doi.org/10.1140/epjc/s10052-013-2311-3Eur. Phys. J. C 73 (2013) 2311, [https://arxiv.org/abs/1211.11821211.1182]. H1:2015ubc H1, ZEUS collaboration, H. Abramowicz et al., Combination of measurements of inclusive deep inelastic e^±p scattering cross sections and QCD analysis of HERA data, https://doi.org/10.1140/epjc/s10052-015-3710-4Eur. Phys. J. C 75 (2015) 580, [https://arxiv.org/abs/1506.060421506.06042]. H1:2018flt H1, ZEUS collaboration, H. Abramowicz et al., Combination and QCD analysis of charm and beauty production cross-section measurements in deep inelastic ep scattering at HERA, https://doi.org/10.1140/epjc/s10052-018-5848-3Eur. Phys. J. C 78 (2018) 473, [https://arxiv.org/abs/1804.010191804.01019]. Martin:2009iq A. D. Martin, W. J. Stirling, R. S. Thorne and G. Watt, Parton distributions for the LHC, https://doi.org/10.1140/epjc/s10052-009-1072-5Eur. Phys. J. C 63 (2009) 189–285, [https://arxiv.org/abs/0901.00020901.0002]. Buckley:2014ana A. Buckley, J. Ferrando, S. Lloyd, K. Nordström, B. Page, M. Rüfenacht et al., LHAPDF6: parton density access in the LHC precision era, https://doi.org/10.1140/epjc/s10052-015-3318-8Eur. Phys. J. C 75 (2015) 132, [https://arxiv.org/abs/1412.74201412.7420]. deFlorian:2007aj D. de Florian, R. Sassot and M. Stratmann, Global analysis of fragmentation functions for pions and kaons and their uncertainties, https://doi.org/10.1103/PhysRevD.75.114010Phys. Rev. D 75 (2007) 114010, [https://arxiv.org/abs/hep-ph/0703242hep-ph/0703242]. ALICE:2023fov ALICE collaboration, Physics of the ALICE Forward Calorimeter upgrade, ALICE-PUBLIC-2023-001 (2023).
http://arxiv.org/abs/2307.04884v1
20230710201235
A $q$-Chaundy representation for the product of two nonterminating basic hypergeometric series and its symmetric generating functions
[ "Howard S. Cohl", "Roberto S. Costas-Santos" ]
math.CA
[ "math.CA", "33D45, 05A15, 42C05, 33D15" ]
A q-Chaundy representation for the product of two nonterminating basic hypergeometric series and its symmetric generating functions Howard S. Cohl^∗ and Roberto S. Costas-Santos^† ^∗ Applied and Computational Mathematics Division, National Institute of Standards and Technology, Mission Viejo, CA 92694, USA http://www.nist.gov/itl/math/msg/howard-s-cohl.cfm [email protected] ^† Department of Quantitative Methods, Universidad Loyola Andalucía, Sevilla, Spain http://www.rscosan.com [email protected] We derive double product representations of nonterminating basic hypergeometric series using diagonalization, a method introduced by Theo William Chaundy in 1943. We also present some generating functions that arise from it in the q and q-inverse Askey schemes. Using this q-Chaundy theorem which expresses a product of two nonterminating basic hypergeometric series as a sum over a terminating basic hypergeometric series, we study generating functions for the symmetric families of orthogonal polynomials in the q and q-inverse Askey scheme. By applying the q-Chaundy theorem to q-exponential generating functions due to Ismail, we are able to derive alternative expansions of these generating functions and from these, new representations for the continuous q-Hermite and q-inverse Hermite polynomials which are connected by a quadratic transformation for the terminating basic hypergeometric series representations. basic hypergeometric functions; generating functions; orthogonal polynomials; q-Askey scheme; nonterminating representations; terminating representations 33D45, 05A15, 42C05, 33D15 § INTRODUCTION In this paper we exploit a method introduced by Theo William Chaundy in 1943 (see <cit.>) for re-expressing double summation nonterminating expressions in terms of an infinite sum of terminating expressions. This method is sometimes referred to as diagonal summation or simply diagonalization. Chaundy applied this method to re-write products of generalized hypergeometric series. Sometimes the formulas which result from this method for a product of two generalized hypergeometric functions lead to very beautiful representations in terms of a single generalized hypergeometric series. For several nice examples, see for instance Clausen's formula <cit.> (see also <cit.>) {21a,ba+b+1/2z}^2 =322a,2b,a+ba+b+1/2,2a+2bz, and Bailey's formula <cit.> 11a2az11b2b-z=231/2(a+b),1/2(a+b+1)a+1/2,b+1/2,a+b1/4z^2. Other nice examples can be found in <cit.>. The goal of this paper is to extend the general results which were conceived of by Chaundy to the q-realm and to investigate some of their applications. § PRELIMINARIES We adopt the following set notations: ℕ_0:={0}∪={0, 1, 2,…}, and we use the sets ℤ, ℝ, ℂ which represent the integers, real numbers and complex numbers respectively, :=∖{0}, and :={z∈: |z|<1}. We adopt the following conventions for succinctly writing elements of sets.
To indicate sequential positive and negative elements, we write ± a:={a,-a}. We also adopt an analogous notation z^±:={z,z^-1}. Consider q∈, n∈ℕ_0. Define the sets Ω_q^n:={q^-k: k∈ℕ_0, 0≤ k≤ n-1}, Ω_q:=Ω_q^∞ ={q^-k:k∈ℕ_0}. We will use <cit.> n+k2= n2+k2+kn, n-k2 =n2+k2+k(1-n). We also require the q-shifted factorial (a;q)_n=(1-a)(1-qa)⋯(1-q^n-1a), n∈_0. One may also define (a;q)_∞:=∏_n=0^∞ (1-aq^n), where |q|<1. Furthermore, define (a;q)_b:=(a;q)_∞/(a q^b;q)_∞. where a q^b∉Ω_q. We will also use the common notational product conventions (a_1,...,a_k)_b:= (a_1)_b⋯(a_k)_b, (a_1,...,a_k;q)_b:= (a_1;q)_b⋯(a_k;q)_b, where b∈ℂ∪{∞}. The q-shifted factorial also has the following useful properties <cit.>: (a;q^-1)_n=q^-n2(-a)^n(a^-1;q)_n, (a;q)_n+k=(a;q)_k(aq^k;q)_n = (a;q)_n(aq^n;q)_k, (a;q)_n =(q^1-n/a;q)_n(-a)^nq^n2, (a;q)_n-k/(b;q)_n-k= (b/a)^k (a;q)_n(q^1-n/b;q)_k/(b;q)_n(q^1-n/a;q)_k, a,b 0, k=0,1,2,…,n, (q^-n-k;q)_k=(q;q)_k(q^1+k;q)_n/(q;q)_n (-1)^k q^k2-k^2-nk. We note that an equivalent representation of (<ref>), which is very useful for obtaining limits which we often need, is a^n(x/a;q)_n= q^n2(-x)^n(a/x;q^-1)_n, therefore lim_a→0 a^n(x/a;q)_n = lim_b→∞ 1/b^n(xb;q)_n = q^n2(-x)^n. From (<ref>), another useful limit representation is lim_λ→∞(aλ;q)_n/(bλ;q)_n =(a/b)^n. Furthermore, one has the following identities <cit.> (a^2;q)_∞=(± a,± q^1/2 a;q)_∞, (a;q^1/2)_∞=(a,q^1/2 a;q)_∞. §.§ Basic hypergeometric series The basic hypergeometric series, which we will often use, is defined for q,z∈ such that |q|,|z|<1, s,r∈ℕ_0, b_j∉Ω_q, j=1,...,s, as <cit.> rsa_1,...,a_rb_1,...,b_sq,z :=∑_k=0^∞(a_1,...,a_r;q)_k/(q,b_1,...,b_s;q)_k((-1)^kq^ k2)^1+s-r z^k. For s+1>r, _rϕ_s is an entire function of z, for s+1=r then _rϕ_s is convergent for |z|<1, and for s+1<r the series is divergent unless it is terminating. Note that when we refer to a basic hypergeometric function with arbitrary argumentz, we mean that the argument does not necessarily depend on the other parameters, namely the a_j's, b_j's nor q. However, for the arbitrary argument z, it very-well may be that the domain of the argument is restricted, such as for |z|<1. We refer to a basic hypergeometric series as ℓ-balanced if q^ℓ a_1⋯ a_r=b_1⋯ b_s, and balanced if ℓ=1. A basic hypergeometric series _r+1ϕ_r is well-poised if the parameters satisfy the relations qa_1=b_1a_2=b_2a_3=⋯=b_ra_r+1. It is very-well poised if in addition, {a_2,a_3}=± qa_1^1/2. Terminating basic hypergeometric series which appear in the definitions of basic hypergeometric orthogonal polynomials are defined as rsq^-n,a_1,...,a_r-1b_1,...,b_sq,z:=∑_k=0^n (q^-n,a_1,...,a_r-1;q)_k/(q,b_1,...,b_s;q)_k((-1)^kq^ k2)^1+s-rz^k, where b_j∉Ω_q^n, j=1,...,s. In the sequel, we will use the following notation _r+1ϕ_s^m, m∈ℤ (originally due to van de Bult & Rains <cit.>), for basic hypergeometric series with zero parameter entries. Consider p∈ℕ_0. Then define _r+1ϕ_s^-p([ a_1,…,a_r+1; b_1,…,b_s ];q,z ) := r+p+1sa_1,a_2,…,a_r+1,0,…,0^p b_1,b_2,…,b_sz, _r+1ϕ_s^ p([ a_1,…,a_r+1; b_1,…,b_s ];q,z ) := r+1s+pa_1,a_2,…,a_r+1 b_1,b_2,…,b_s,0,…,0_pz, where b_1,…,b_s∉Ω_q∪{0}, and _r+1ϕ_s^0:=_r+1ϕ_s. The nonterminating basic hypergeometric series _r+1ϕ_s^m( a; b;q,z), a:={a_1,…,a_r+1}, b:={b_1,…,b_s}, is well-defined for s-r+m≥ 0. In particular _r+1ϕ_s^m is an entire function of z for s-r+m>0, convergent for |z|<1 for s-r+m=0 and divergent if s-r+m<0. 
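The q-shifted factorial identities above can be verified numerically for small n directly from the finite product definition. The following minimal sketch uses only the Python standard library; the helper qpoch and the parameter values are illustrative choices of ours.

```python
from math import comb, prod

def qpoch(a, q, n):
    """Finite q-shifted factorial (a; q)_n = prod_{k=0}^{n-1} (1 - a q^k)."""
    return prod(1.0 - a * q**k for k in range(n))

a, q, n, k = 0.35, 0.6, 5, 3

# (a; q^{-1})_n = q^{-binom(n,2)} (-a)^n (a^{-1}; q)_n
print(qpoch(a, 1/q, n), q**(-comb(n, 2)) * (-a)**n * qpoch(1/a, q, n))

# (a; q)_{n+k} = (a; q)_n (a q^n; q)_k
print(qpoch(a, q, n + k), qpoch(a, q, n) * qpoch(a * q**n, q, k))

# (a; q)_n = (q^{1-n}/a; q)_n (-a)^n q^{binom(n,2)}
print(qpoch(a, q, n), qpoch(q**(1 - n) / a, q, n) * (-a)**n * q**comb(n, 2))
```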
Note that we will move interchangeably between the van de Bult & Rains notation and the alternative notation with vanishing numerator and denominator parameters which are used on the right-hand sides of (<ref>) and (<ref>). We will often use (frequently without mentioning) the following limit transition formulas which can be found in <cit.> lim_λ→∞rsa_1,...,a_r-1,λ a_rb_1,...,b_sq,z/λ =r-1sa_1,...,a_r-1b_1,...,b_sq,a_rz, lim_λ→∞rsa_1,...,a_rb_1,...,b_s-1, λ b_sq,λ z=rs-1a_1,...,a_rb_1,...,b_s-1q,z/b_s, lim_λ→∞rsa_1,...,a_r-1,λ a_rb_1,...,b_s-1,λ b_sq,z =r-1s-1a_1,...,a_r-1b_1,...,b_s-1q,a_r/b_sz. The q-binomial theorem is <cit.> 10a-q,z=(az;q)_∞/(z;q)_∞, q∈, |z|<1. Also, one has two q-analogues of the exponential function which are due to Euler <cit.> (see also <cit.>). Let q∈, z∈. Then e_q(z):=00-1--q,z=1/(z;q)_∞, |z|<1, E_q(-z)=00--q,z=(z;q)_∞. See proof of <cit.>. One has the following relation between _2ϕ_2 and _2ϕ_1 cf. <cit.> 22a,bc,abz/cq,z= (bz/c;q)_∞/(abz/c;q)_∞21a,c/bcq,bz/c. One also has the following nonterminating transformations <cit.> 101a-q,z =(a,z;q)_∞01-2-zq,a =(z;q)_∞01-zq,az. In <cit.>, one finds the inversion formula for terminating basic hypergeometric series. Let m, n, k, r, s∈ℕ_0, 0≤ k≤ r, 0≤ m≤ s, a_k, b_m∉Ω^n_q∪{0}, q∈ℂ^∗ such that |q| 1. Then, r+1sq^-n,a_1,...,a_rb_1,...,b_s q,z=(a_1,...,a_r;q)_n/(b_1,...,b_s;q)_n(z/q)^n((-1)^nq^n2)^s-r-1 ×∑_k=0^n (q^-n,q^1-n/b_1,...,q^1-n/ b_s;q)_k/(q,q^1-n/a_1,...,q^1-n/a_r;q)_k(b_1⋯ b_s/a_1⋯ a_rq^n+1/z)^k. From the above inversion formula (<ref>), one may derive the following useful terminating basic hypergeometric transformation lemma. Let p∈, n,r,s∈ℕ_0, a_k, b_m∉Ω^n_q∪{0}, z, q∈ℂ^∗ such that |q| 1. Then r+1spq^-n,a_1,...,a_rb_1,...,b_sq,z=(a_1,...,a_r; q)_n/(b_1,...,b_s;q)_n(z/q)^n ((-1)^nq^n2)^s-r+p-1 ×s+1rs-r+pq^-n,q^1-n/b_1,...,q^1-n/b_sq^1-n/a_1,...,q^1-n/a_rq,b_1⋯ b_s/a_1⋯ a_rq^(1-p)n+p+1/z. In a straightforward calculation, if we write (<ref>) and we apply (<ref>) assuming all the parameters are nonzero, and then we apply identities (<ref>) and (<ref>) one obtains (<ref>). This completes the proof. Let n,r∈ℕ_0, q∈ℂ^∗ such that |q| 1, and for 0≤ k≤ r, let a_k, b_k∉Ω^n_q∪{0}. Then, r+1rq^-n,a_1,…,a_rb_1,…,b_rq,z = q^-n2 (-1)^n (a_1,…,a_r;q)_n/(b_1,…,b_r;q)_n (z/q)^nr+1rq^-n, q^1-n/b_1,…, q^1-n/b_rq^1-n/a_1,…, q^1-n/a_rq, q^n+1/zb_1⋯ b_r/a_1⋯ a_r. Take r=s, p=0 in (<ref>), which completes the proof. Note that in Corollary <ref> if the terminating basic hypergeometric series on the left-hand side is balanced then the argument of the terminating basic hypergeometric series on the right-hand side is q^2/z. Another equality we can use is the following connecting relation between terminating basic hypergeometric series with base q, and with base q^-1: r+1rq^-n,a_1,...,a_rb_1,...,b_rq,z= r+1rq^n, a^-1_1,..., a^-1_rb^-1_1, ..., b^-1_rq^-1, a_1 a_2⋯ a_rb_1 b_2⋯ b_rzq^n+1 = q^-n2(-z/q)^n (a_1,…,a_r;q)_n/(b_1,…,b_r;q)_nr+1rq^-n, q^1-n/b_1,..., q^1-n/b_rq^1-n/a_1,..., q^1-n/a_rq,b_1⋯ b_r/a_1⋯ a_rq^n+1/z. In order to understand the procedure for obtaining the q-inverse analogues of the basic hypergeo­metric orthogonal polynomials studied in this manuscript, let us consider a special case in detail. Let n∈_0, f_n,r(q):=f_n,r(q;z(q); a(q), b(q)):=g_r(q) r+1rq^-n, a(q) b(q)q,z(q), where . [ a(q):={a_1(q),…,a_r(q)}; b(q):={b_1(q),…,b_r(q)} ]}, which will suffice, for instance, for the study of the terminating basic hypergeometric representations for the Askey-Wilson polynomials. 
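As a quick numerical illustration, the q-binomial theorem and Euler's q-exponential identities above can be checked directly; the short Python sketch below is our addition, with arbitrary parameter values and truncation levels.

```python
def qpoch(a, q, n):
    p = 1.0
    for j in range(n):
        p *= 1.0 - a * q**j
    return p

def qpoch_inf(a, q, terms=400):
    return qpoch(a, q, terms)

q, a, z = 0.4, 0.7, 0.3

# q-binomial theorem: 1phi0(a; -; q, z) = (az;q)_inf / (z;q)_inf for |z| < 1.
one_phi_zero = sum(qpoch(a, q, k) / qpoch(q, q, k) * z**k for k in range(200))
assert abs(one_phi_zero - qpoch_inf(a * z, q) / qpoch_inf(z, q)) < 1e-10

# Euler: e_q(z) = 1/(z;q)_inf and E_q(-z) = (z;q)_inf.
e_q = sum(z**k / qpoch(q, q, k) for k in range(200))
E_q_minus = sum((-1)**k * q**(k*(k - 1)//2) * z**k / qpoch(q, q, k) for k in range(200))
assert abs(e_q - 1.0 / qpoch_inf(z, q)) < 1e-10
assert abs(E_q_minus - qpoch_inf(z, q)) < 1e-10
```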
In order to obtain the corresponding q-inverse hypergeometric representations of f_n,r(q), one only needs to consider the corresponding q-inverted function: f_n,r(q^-1)=g_r(q^-1) r+1rq^n, a(q^-1) b(q^-1)q^-1,z(q^-1). Let r,k∈ℕ_0, 0≤ k≤ r, a_k(q)∈ℂ, b_k(q)∈Ω_q, q∈ℂ^∗ such that |q| 1, z(q)∈ℂ. Define a(q):=(a_1(q),…,a_r(q)), b(q):=(b_1(q),…,b_r(q)) and a multiplier function g_r(q):=g_r(q;z(q); a(q); b(q)) which is not of basic hypergeometric type (some multiplicative combination of powers and q-Pochhammer symbols), and z(q):=z(q; a(q); b(q)). Then defining f_n,r(q) as in (<ref>), one has f_n,r(q^-1) =g_r(q^-1)r+1rq^-n, a^-1(q^-1) b^-1(q^-1)q,q^n+1a_1(q^-1)⋯ a_r(q^-1) z(q^-1)/b_1(q^-1)⋯ b_r(q^-1). By using (<ref>) repeatedly with the definition (<ref>) in (<ref>), one obtains the q-inverted terminating representation (<ref>), which corresponds to the original terminating basic hypergeo­me­tric representation (<ref>). This completes the proof. Now consider the more general case. Let r,s∈ℕ_0, 0≤ t≤ r, 0≤ u≤ s, and let . [ a(q):={a_1(q),…,a_r-t(q),0,…,0^t}; b(q):={b_1(q),…,b_s-u(q),0,…,0_u} ]}, where either t>0, u=0, or u>0, t=0, or t=u=0, and as above, a multiplier function g_r,s,t,u(q):=g_r,s,t,u(q;z(q); a(q); b(q)) and z(q):=z(q; a(q); b(q)). Define f_r,s,t,u(q):= g_r,s,t,u(q) r+1sq^-n, a(q) b(q)q,z(q). In order to obtain the q-inverted representation of f_r,s,t,u, one must again compute f_r,s,t,u(q^-1)=g_r,s,t,u(q^-1) r+1sq^n, a(q^-1) b(q^-1)q^-1,z(q^-1). This can be obtained by repeated use of (<ref>) using the definition (<ref>) and various combinations of (<ref>)–(<ref>). § CONTINUOUS BASIC HYPERGEOMETRIC ORTHOGONAL POLYNOMIALS We will study a subset of basic hypergeometric orthogonal polynomials in the q-Askey scheme which we refer to as continuous basic hypergeometric orthogonal polynomials. These are basic hypergeometric orthogonal polynomials whose orthogonality relation is given by an integral over an interval on the real line. In the remainder of the paper we will be examining orthogonal polynomials in x=1/2(z+z^-1). Note that in this case, x=x(z) is invariant under the map z↦ z^-1, so all functions (including polynomials) in x will also satisfy this invariance. §.§ The continuous q and q-inverse symmetric families The Askey–Wilson polynomials are at the top of the symmetric family of basic hypergeometric orthogonal polynomials. The continuous dual q-Hahn p_n(x;a,b,c|q), Al-Salam–Chihara p_n(x;a,b|q), continuous big q-Hermite H_n(x;a|q) and continuous q-Hermite H_n(x|q) polynomials are the d→ c→ b→ a→ 0 limit cases of the Askey–Wilson polynomials, namely p_n(x;a,b,c|q)=lim_d→ 0p_n(x;a,b,c,d|q), p_n(x;a,b|q)=lim_c→ 0p_n(x;a,b,c|q), H_n(x;a|q)=lim_b→ 0p_n(x;a,b|q), H_n(x|q)=lim_a→ 0H_n(x;a|q). The continuous dual q-Hahn and Al-Salam–Chihara polynomials are symmetric in the variables a,b,c, and a,b respectively. By starting with representations of the Askey–Wilson polynomials (<ref>), we can obtain terminating basic hypergeometric series representations of the symmetric family. Furthermore, the q-inverse symmetric family are also a set of symmetric polynomials in their parameters a,b,c. These polynomials can also be obtained as c→ b→ a→ 0 limit cases p_n(x;a,b|q^-1)=lim_c→ 0p_n(x;a,b,c|q^-1), H_n(x;a|q^-1)=lim_b→ 0p_n(x;a,b|q^-1), H_n(x|q^-1)=lim_a→ 0H_n(x;a|q^-1). §.§ The Askey–Wilson polynomials The Askey–Wilson polynomials have the following terminating _4ϕ_3 basic hypergeometric series representations <cit.>. Let n∈_0, q∈, x=1/2(z+z^-1), z∈, a,b,c,d∈. 
Then p_n(x;a,b,c,d|q) := a^-n (ab,ac,ad;q)_n 43q^-n,q^n-1abcd, az^±ab,ac,adq,q =q^-n2 (-a)^-n(abcd/q;q)_2n (a z^±;q)_n/(abcd/q;q)_n43q^-n, q^1-n/ab, q^1-n/ac, q^1-n/ad q^2-2n/abcd,q^1-n/az^±q,q =z^n(ab,cz^-1,dz^-1;q)_n 43q^-n,az,bz,q^1-n/cdab,q^1-n/cz,q^1-n/dzq,q. See proof of <cit.>. The q-inverse Askey-Wilson polynomials p_n(x;a,b,c,d|q^-1) are given by p_n(x;a,b,c,d|q^-1) =q^-3n2(-abcd)^np_n(x;1a,1b,1c,1d|q), which follows from Theorem <ref>, Proposition <ref>, and Remark <ref>. §.§ The continuous dual q-Hahn and dual q-inverse Hahn polynomials The continuous dual q-Hahn polynomials are symmetric in three parameters a,b,c. One has the following basic hypergeometric representations of the continuous dual q-Hahn polynomials Let n∈ℕ_0, x=1/2(z+z^-1), z∈, q∈, a,b,c∈. Then, the continuous dual q-Hahn polynomials can be given by: p_n(x;a,b,c|q) := a^-n (ab,ac;q)_n 32q^-n, az^±ab,acq,q =q^-n2(-a)^-n (az^±;q)_n32q^-n, q^1-n/ab, q^1-n/ac q^1-n/az^±q,q^n bc =z^n (ab,cz^-1;q)_n32q^-n, az, bzab,q^1-n/czq,q/cz =z^n (az^-1,bz^-1;q)_n 32q^-n,cz,q^1-n/abq^1-n/az,q^1-n/bzq,q. The representation (<ref>) is derived by starting with (<ref>) and replacing b, c, or d→ 0 (see also <cit.>); (<ref>) is derived using (<ref>) and taking, for instance d→ 0; (<ref>) is derived by using (<ref>) and taking d→ 0; (<ref>) is derived by using (<ref>) and taking b→ 0 and replacing d→ b. This completes the proof. The continuous dual q-inverse Hahn polynomials can be obtained from the Askey–Wilson polynomials as follows p_n(x;a,b,c|q^-1)=q^-3n2(-abc)^n lim_d→0d^n p_n(x;1a,1b, 1c,1d|q). Now we give the basic hypergeometric representations of the continuous dual q-inverse Hahn polynomials. Let p_n(x;a,b,c|q) and all the respective parameters be defined as previously. Then, the continuous dual q-inverse Hahn polynomials are given by: p_n(x;a,b,c|q^-1)= q^-2n2 (abc)^n ( 1/ab,1/ac ;q)_n32q^-n,z^±/a1/ab,1/ac q,q^n/bc =q^-n2(-a)^n(z^±/a;q)_n 32q^-n, q^1-nab, q^1-nac q^1-naz^±q,q =q^-2n2 (abc)^n(1/ab,z/c;q)_n 32q^-n, 1/az, 1/bzq^1-nc/z, 1/abq,q =q^-2n2(ab/z)^n (z/a,z/b;q)_n32q^-n,1/cz,q^1-nabq^1-na/z,q^1-nb/zq,qc/z. Each inverse representation is derived from the corresponding representation by apply­ing the map q↦ 1/q and using (<ref>). §.§ The Al-Salam–Chihara and q-inverse Al-Salam–Chihara polynomials Let n∈ℕ_0, x=1/2(z+z^-1), z∈, q∈, a,b∈. Then, the Al-Salam-Chihara polynomials are given by: p_n(x;a,b|q):=a^-n(ab;q)_n 311q^-n, az^±abq,q = q^-n2 (-a)^-n (az^±;q)_n22q^-n,q^1-n/abq^1-n/az^±q,qb/a = z^n (ab;q)_n31q^-n, az,bzabq,q^n/z^2 =z^n(a z^-1;q)_n 21q^-n,bzq^1-n/azq,q/az =z^n( az^-1,bz^-1 ;q)_n 22-1q^-n,q^1-n/abq^1-n/az, q^1-n/bz q,q. The representation (<ref>) is derived by taking (<ref>) and replacing c↦ 0 (see also <cit.>); (<ref>) is derived by taking (<ref>) and replacing c↦ 0; (<ref>) is derived by taking (<ref>) and replacing c↦ 0; (<ref>) is derived by taking (<ref>) replacing b↦ 0 (see also <cit.>) and interchanging c and a; (<ref>) is derived by taking (<ref>) and replacing c↦ 0, Using the Al-Salam-Chihara polynomial representations, we can compute their q-inverse analogs. Let p_n(x;a,b|q) and the respective parameters be defined as previously. Then, the q-inverse Al-Salam-Chihara polynomials are given by: p_n(x;a,b|q^-1)= q^-n2 (-b)^n (1/ab ;q)_n31q^-n, z^±/a1/abq,q^na/b =q^-n2(-a)^n(z^±/a;q)_n 22-1q^-n,q^1-nabq^1-naz^±q,q =q^-n2(-ab z)^n(1/ab;q)_n 311q^-n, 1/az, 1/bz 1/abq,q = q^-()0ptn2 (-a)^n (1/az;q)_n21q^-n, z/bq^1-nazq, qbz = q^-2n2(abz)^n ( 1/az, 1/bz ;q)_n 22q^-n, q^1-nab q^1-naz,q^1-nbz q,qz^2. 
Each inverse representation is derived from the corresponding representation by apply­ing the map q↦ 1/q and using (<ref>). §.§ The continuous big q-Hermite and big q-inverse Hermite polynomials Let n∈ℕ_0, q∈, a∈, x=1/2(z+z^-1), z∈. The continuous big q-Hermite polynomials are given by: H_n(x;a|q) :=a^-n302q^-n, az^±-q,q = q^-n2 (-a)^-n (az^±;q)_n 12q^-nq^1-nz^±/aq,q^2-n/a^2 =z^n(az^-1;q)_n 11-1q^-nq^1-n/azq,q/az =z^n (az^-1;q)_n/(q/az;q)_∞11qz/aq^1-n/azq,q^1-n/az =z^n20q^-n, az -q,q^n/z^2. The representation (<ref>) is derived by taking (<ref>) and replacing a_2↦ 0 (see also <cit.>); (<ref>) is derived by taking (<ref>) and replacing a_2↦ 0; (<ref>) is derived by taking eqrefASC:def3 and replacing a_2↦ 0; (<ref>) is derived from (<ref>) using <cit.>; (<ref>) is derived by taking (<ref>) or (<ref>) and replacing b↦ 0 (see also <cit.>). Using the continuous big q-Hermite polynomials, we can compute their q-inverse representations. Let H_n(x;a|q) and the respective parameters be defined as previously. Then, the continuous big q-inverse Hermite polynomials are given by: H_n(x;a|q^-1) =a^-n30q^-n,z^±/a-q,q^na^2 =q^-n2(-a)^n(z^±/a;q)_n12-2q^-nq^1-naz^±q,q = q^-()0ptn2(-a)^n (1/az;q)_n11q^-nq^1-nazq,qz^2 =z^n201q^-n,1/az-q,qa/z. Each inverse representation is derived from the corresponding representation by applying the map q↦ 1/q and using (<ref>). §.§ The continuous q-Hermite and q-inverse Hermite polynomials Let n∈_0, q∈, x=1/2(z+z^-1), z∈. Then, one has the following terminating basic hypergeometric representation for the continuous q-Hermite polynomials: H_n(x|q):= z^n 10-1q^-n-q,q^n/z^2. Start with (<ref>) and take the limit as d→ c→ b→ a→ 0 sequentially. Similarly, we can compute the basic hypergeometric representation of the continuous q-inverse Hermite polynomials. Let H_n(x|q) and the respective parameters be defined as previously. The continuous q-inverse Hermite polynomials are given by: H_n(x|q^-1) =z^n101q^-n-q,q/z^2 =z^n(qz^2;q)_∞01-q/z^2q,q^1-n/z^2. The inverse representation (<ref>) is derived from (<ref>) by applying the map q↦ 1/q and using (<ref>). The representation (<ref>) follows from (<ref>) by applying the transformation (<ref>). Note that there exist the connection relations between the continuous q-Hermite polynomials and the continuous q-inverse Hermite polynomials <cit.> H_n(x|q)=(q;q)_n∑_k=0^⌊n/2⌋(-1)^kq^1/2 k(3k-2n-1)/(q;q)_k(q;q)_n-2kH_n-2k(x|q^-1), H_n(x|q^-1)=(q;q)_n∑_k=0^⌊n/2⌋q^-k(n-k)/(q;q)_k(q;q)_n-2kH_n-2k(x|q). § Q-CHAUNDY NONTERMINATING DOUBLE PRODUCT REPRESENTATIONS We derive two equivalent q-Chaundy infinite series representations for a product of two nonterminating basic hypergeometric series. These representations are given by sums over terminating basic hypergeometric series using the van de Bult & Rains notation (<ref>), (<ref>). Let r,s∈_0∪{-1}, u,v∈_0, p,ℓ∈ such that p≥ r-u and ℓ≥ s-v, a∈^r+1, b∈^u, c∈^s+1, d∈^v, q∈. Then r+1up a bq, Xs+1vℓ c dq, Y =∑_n=0^∞( a;q)_n X^n/(q, b;q)_n((-1)^nq^n2)^u-r+p ×s+u+2r+v+1u-r+p+ℓq^-n, c, q^1-n/ b d, q^1-n/ aq, q^1+p(1-n)b_1⋯ b_u Y/a_1⋯ a_r+1 X =∑_n=0^∞( c;q)_n Y^n/(q, d;q)_n((-1)^nq^n2)^v-s+ℓ ×r+v+2s+u+1v-s+p+ℓq^-n, a, q^1-n/ d b, q^1-n/ c q, q^1+ℓ(1-n)d_1⋯ d_v X/c_1⋯ c_s+1 Y, where X, Y are given such that the left-hand side is well-defined. First consider the restriction p,ℓ∈ such that p≥ r-u and ℓ≥ s-v so that both nonterminating basic hypergeometric series are convergent. 
Then starting with the left-hand side of (<ref>) one writes out the double product of two nonterminating basic hypergeometric series as two sums multiplied together using (<ref>), (<ref>) with X↦ g X, Y↦ h X, for some h X,g X∈, namely r+1up a bq,g Xs+1vℓ c dq,h X =∑_n=0^∞( a;q)_n/(q, b;q)_n((-1)^nq^n2)^u-r+p∑_k=0^∞( c;q)_k/(q, d;q)_k((-1)^kq^k2)^v-s+ℓ (g X)^n(h X)^k. Now make a double-index replacement (n',k')=(n+k,k) or equivalently (n,k)=(n'-k',k'). This is referred to as diagonal summation (see <cit.>), and upon replacement n'↦ n, and k'↦ k, we have r+1up a bq,g Xs+1vℓ c dq,h X =∑_n=0^∞(g X)^n ∑_k=0^n( c;q)_k/(q, d;q)_k((-1)^kq^k2)^v-s+ℓ( a;q)_n-k/(q, b;q)_n-k((-1)^n-kq^n-k2)^u-r+p(h/g)^k. Now we use (<ref>), (<ref>), (<ref>), collecting terms using (<ref>), (<ref>), and replacing g X↦ X, h X↦ Y produces (<ref>). Without loss of generality interchanging the two basic hypergeometric series on the left-hand side of (<ref>) produces (<ref>). This completes the proof. The product representations (<ref>), (<ref>) are clearly, term by term, inverses of each other with regard to Lemma <ref>. § APPLICATIONS TO GENERATING FUNCTIONS In this section we will treat generating functions of orthogonal polynomials in the q-Askey scheme, and also, in the q-inverse Askey scheme. A generating function for a basic hypergeometric orthogonal polynomial p_n(x; a|q), where a is a multiset of parameters with base q, is given by f(x,t; a|q)=∑_n=0^∞ t^n h_n( a|q) p_n(x; a|q), where h_n is a coefficient defined such that the infinite series is convergent. Unless otherwise stated, we assume throughout the manuscript that |t|<1. Sometimes other conditions on the parameters might be required in order for the expressions to be well-defined, and also, in some cases the generating functions might be entire functions of t. §.§ Askey–Wilson polynomials The above formulas are quite general. Nonetheless, they can be used to prove some classical generating functions for basic hypergeometric orthogonal polynomials in the q-Askey scheme. The Askey–Wilson polynomials <cit.> are the basic hypergeometric orthogonal polynomials which are at the top of the q-Askey scheme and are symmetric in four parameters a,b,c,d∈. Let q∈, x,a,b,c,d∈, x=1/2(z+z^-1), z∈, t∈ such that |tz^±|<1. Then 21az,bzabq,tz^-121cz^-1,dz^-1cdtz=∑_n=0^∞p_n(x; a|q) t^n/(q,ab,cd;q)_n. Starting with (<ref>) using p=ℓ=0, r=s=u=v=1, a={az,bz}, b={ab}, c={cz^-1,dz^-1}, d={cd}, X=tz^-1, Y=tz, the terminating basic hypergeometric series reduces to an Askey–Wilson polynomial through (<ref>). After simplification, the result follows. As mentioned in <ref>, one can specifically start with (<ref>) and take the sequential limit d→ c→ b→ a→ 0 symmetric subfamilies. Starting from (<ref>), we can also use these sequential limits to obtain the following generating functions. Alternatively, one can use the rep­re­sen­ta­tions in Corollary <ref> with Theorem <ref> to verify the following generating functions. We can also do the same thing with the q-inverse symmetric families. We now proceed in a systematic way to complete this task. §.§ The continuous dual q-Hahn Using the q-Chaundy result one can obtain generating functions for the continuous dual q-Hahn polynomials. Let q∈, a,b,c∈, t∈ℂ, |t|<|z^±|. Then, one has the following generating function for continuous dual q-Hahn polynomials, namely ∑_n=0^∞p_n(x;a,b,c|q) t^n/(q,ab;q)_n =(ct;q)_∞/(tz;q)_∞21az,bzabq,tz^-1. 
The generating function (<ref>) can be derived using Theorem <ref> with r=u=1, p=s=v=ℓ=0, a={az,bz}, b={ab}, c={cz^-1}, d=∅, X=tz^-1, Y=tz along with the representation of the continuous dual q-Hahn polynomials (<ref>). For the constraint note also Remark <ref>. We will not mention this again. This completes the proof. Note that the generating function (<ref>) can also be derived by using the representation for continuous dual q-Hahn polynomials (<ref>) with s=v=1, p=r=u=ℓ=0, a={cz^-1}, b=∅, c={az,bz}, d={ab}, X=tz, Y=tz^-1. This is because the representations (<ref>), (<ref>) are related by the inversion transformation. There is a similar equivalence for Theorem <ref> using the representation of continuous dual q-Hahn polynomials (<ref>). Let q∈, a,b,c∈, t∈ℂ, x=1/2(z+z^-1), z∈, |t|<|a|. Then ∑_n=0^∞t^n q^n2 p_n(x;a,b,c|q)/(q,ab,ac;q)_n=(-ta;q)_∞22-1az^±ab,acq,-t/a. This follows by setting u=2, r=1, v=l=0, s=-1, a={az^±}, b={ab,ac}, c= d=∅ along with the representation of the continuous dual q-Hahn polynomials (<ref>). Finally replacing t↦ -ta^-1 completes the proof. Another example can be generated by the non-standard generating function due to Atakishiyeva and Atakishiyev <cit.> P(x,t;a,b,c|q):=∑_n=0^∞t^n p_n(x;a,b,c|q)/(q,tabc;q)_n =(ta,tb,tc;q)_∞/(tabc,tz^±;q)_∞. The q-Chaundy theorem produces the alternative expansions of this non-standard generating function. Let q∈, x=1/2(z+z^-1), z∈, a, b, c, t∈, |t|<1. Then P(x,t;a,b,c|q)=∑_n=0^∞(ac,bc;q)_n (t/c)^n/(q,abct;q)_n43q^-n,zc^±,q^1-n/abcttz,q^1-n/ac,q^1-n/bcq,qt/z =∑_n=0^∞(zc^±;q)_n (t/z)^n/(q,tz;q)_n43q^-n,ac,bc,q^1-n/tzabct, q^1-n/zc^±q,qt/c. One can use the q-Chaundy Theorem <ref> with the product generating function (<ref>) and identify a={ac,bc}, b={abct}, c={zc^±}, d={tz}, r=s=u=v=1, ℓ=p=0, X=t/c, Y=t/z, which upon insertion completes the proof. §.§ The continuous dual q-inverse Hahn polynomials We can also use the q-Chaundy result to obtain generating functions for the continuous dual q-inverse Hahn polynomials. Let q∈, x=1/2(z+z^-1), z∈, a,b,c,t∈, |t|<1. Then ∑_n=0^∞t^n q^2n2p_n(x;a,b,c|q^-1)/(q,1/ab,1/ac;q)_n =1/(abct;q)_∞22z^±/a1/ab,1/acq,at. Start with (<ref>) and identify c={z^±/a}, d={1/ab,1/ac}, a= b=∅, v=2, s=1, u=ℓ=0, r=p=-1, X=bct, Y=t in Theorem <ref> with (<ref>). Finally, replacing t↦ at, completes the proof. Starting with the representation of the continuous dual q-inverse polynomials (<ref>) combined with Theorem <ref> produces Theorem <ref>. The following generating function has been previously discovered by Ismail, Zhang and Zhou in <cit.>. However, we are able to prove it alternatively using the q-Chaundy Theorem <ref> as follows. Let q∈, x=1/2(z+z^-1), z∈, a,b,c,t∈, |t|<1. Then ∑_n=0^∞t^n q^2n2 p_n(x;a,b,c|q^-1)/(q,1/ab;q)_n =(bt;q)_∞/(abct;q)_∞22z^±/a1/ab,btq,at =(tab/z;q)_∞/(abct;q)_∞21z/a,z/b1/abq,tab/z, where |tab|<|z^±| in the second representation. Start with (<ref>) and identify a={z/a,z/b}, b={1/ab}, c={1/cz}, d=∅, u=r=1, s=v=p=ℓ=0, X=t, Y=czt in Theorem <ref> with (<ref>). Finally replacing t↦ tab/z completes the proof. Starting with the representation of the continuous dual q-inverse polynomials (<ref>) combined with Theorem <ref> produces a generating function which is equivalent to Theorem <ref>. Now we present the following _3ϕ_3 product generating function for continuous dual q-inverse Hahn polynomials. Let q∈, x=1/2(z+z^-1), z∈, γ,a,b,c,t∈, |t|<1. Then G_γ(x,t;a,b,c|q):= ∑_n=0^∞t^n (γ;q)_n q^2n2 p_n(x;a,b,c|q^-1)/(q,1/ab,1/ac;q)_n =(γ abct;q)_∞/(abct;q)_∞33γ,z^±/a1/ab,1/ac,γ abctq,at. 
Start with the definition of G_γ in (<ref>) and insert the representation of the continuous dual q-inverse Hahn polynomials (<ref>), which then is a double sum over n,k. Then reverse the order of summation and shift the n index n↦ n+k. This converts the outer sum to the form of a q-binomial and the result follows. Using the q-Chaundy product representations we can obtain the following double sum representations of the _3ϕ_3 in G_γ(x,t;a,b,c|q). Let q∈, x=1/2(z+z^-1), z∈, γ,a,b,c,t∈, |t|<1. Then G_γ(x,t;a,b,c|q) =∑_n=0^∞(γ;q)_n (abct)^n/(q;q)_n44q^-n,γ,z^±/a1/ab,1/ac,γ abct,q^1-n/γq,q/γ bc =∑_n=0^∞(γ,z^±/a;q)_n (-at)^n q^n2/(q,1/ab,1/ac,γ abct;q)_n531q^-n,γ,q^1-nab,q^1-nac,q^1-n/γ abctq^1-n/γ,q^1-naz^±q,qabct. Applying the q-Chaundy product representation Theorem <ref> and in particular (<ref>) and (<ref>) respectively produces the double sum representations of the _3ϕ_3 in Theorem <ref>. If one takes the limit γ→ 0 then the representations of the generating function G_γ produces the Theorem <ref>. Taking the limit as γ→ 0 in Corollary <ref> produces representations of G_0 using (<ref>), (<ref>) respectively. Replacing γ=1/ac in (<ref>) and using (<ref>) produces Theorem <ref>. Let q∈, x=1/2(z+z^-1), z∈, a,b,c,t∈, |t|<1. Then ∑_n=0^∞t^n q^2n2 p_n(x;a,b,c|q^-1)/(1/ab,1/ac;q)_n=(q abct;q)_∞/(abct;q)_∞33q,z^±/a1/ab,1/ac,q abctq,at. Setting γ=q in Theorem <ref> completes the proof. §.§ The Al-Salam–Chihara polynomials The Al-Salam–Chihara polynomials have three standard (well-known) generating functions <cit.> which all follow easily using the q-Chaundy Theorem <ref>. Let q∈, x=1/2(z+z^-1), z∈, a,b,t∈, |t|<1. Then ∑_n=0^∞t^n p_n(x;a,b|q)/(q;q)_n=(at,bt;q)_∞/(tz^±;q)_∞, ∑_n=0^∞t^n p_n(x;a,b|q)/(q,ab;q)_n=1/(tz;q)_∞21az,bzabq,t/z, ∑_n=0^∞t^n q^n2 p_n(x;a,b|q)/(q,ab;q)_n=(-ta;q)_∞21az^±abq,-t/a, where |t|<|z^±|, |t|<|a|, in the second and third generating functions respectfully, so that the nonterminating Gauss basic hypergeometric series are convergent. The generating function (<ref>) follows from the representation (<ref>) using the q-Chaundy Theorem <ref> with r=s=u=v=p=ℓ=0, a={az^-1}, c={bz}, b= d=∅, X=zt, Y=tz^-1, and the q-binomial theorem twice. The generating function (<ref>) follows from the representation (<ref>) (or (<ref>)) with r=p=-1, u=ℓ=0, s=v=1, c={az,bz}, d={ab}, a= b=∅, X=zt, Y=tz^-1, and the application of Euler's Theorem <ref> once. The generating function (<ref>) follows from the representation (<ref>) (or (<ref>)) with r=-1, u=p=ℓ=0, s=v=1, c={az^±}, d={ab}, a= b=∅, X= Y=t, and the application of Euler's Theorem <ref> once. There's another generating function for Al-Salam–Chihara polynomials <cit.> L_γ(x,t;a,b|q):=∑_n=0^∞t^n (γ;q)_n p_n(x;a,b|q)/(q,ab;q)_n=(γ tz;q)_∞/(tz;q)_∞32γ,az,bzab,γ tzq,t/z, where |t|<|z^±|. Let q∈, x=1/2(z+z^-1), z∈, a,b,t∈, |t|<1. Then L_γ(x,t;a,b|q)=∑_n=0^∞(γ;q)_n (tz)^n/(q;q)_n43q^-n,γ,az,bzab,γ tz,q^1-n/γq,q/γ z^2 =∑_n=0^∞(γ,az,bz;q)_n (t/z)^n/(q,ab,γ tz;q)_n43q^-n,γ,q^1-n/ab,q^1-n/γ tzq^1-n/γ,q^1-n/az,q^1-n/bzq,qtz. Starting with the generating function (<ref>), and applying both expansions of the q-Chaundy Theorem <ref> using r=u=p=ℓ=0, s=v=2, X=tz, Y=t/z, a={γ}, c={γ,az,bz}, d={ab,γ tz}, b=∅, completes the proof. §.§ The q-inverse Al-Salam–Chihara polynomials One has the following generating function for q-inverse Al-Salam–Chihara polynomials which come from the representations (<ref>)-(<ref>). Let q∈, x=1/2(z+z^-1), z∈, a,b,t∈, |tab|<|z^±|. Then ∑_n=0^∞t^n q^2n2p_n(x;a,b|q^-1)/(q,1/ab;q)_n =(tab/z;q)_∞21z/a,z/b1/abq,tab/z. 
This generating function can be obtained by starting with the q-Chaundy Theorem <ref> with representation (<ref>) (or (<ref>)), s=-1, v=p=ℓ=0, u=r=1, a={1/az,1/bz}, b={1/ab}, c= d=∅, X= Y=t, replacing t↦ tabz and then z↦ z^-1. Similarly, one can take (<ref>) and take the limit as c→ 0. This completes the proof. Similarly, from (<ref>), we obtain the following infinite product generating function which was originally obtained in <cit.> (see also <cit.>, <cit.>). Let q∈, x=1/2(z+z^-1), z∈, a,b,t∈, |t|<1. Then ∑_n=0^∞t^n q^n2 p_n(x;a,b|q^-1)/(q;q)_n =(-tz^±;q)_∞/(-ta,-tb;q)_∞. This generating function can be obtained by starting with the q-Chaundy Theorem <ref> with representation (<ref>) and s=-1, r=u=s=v=p=ℓ=0, a={1/az}, c={z/b}, b= d=∅, X=at, Y=bt, and replacing t↦ -t. This completes the proof. By starting with (<ref>) (or (<ref>)) and the q-Chaundy Theorem <ref>, we can obtain another generating function. Let q∈, x=1/2(z+z^-1), z∈, a,b,t∈, |ta|<1. Then ∑_n=0^∞t^n q^n2 p_n(x;a,b|q^-1)/(q,1/ab;q)_n= 1/(-bt;q)_∞21z^±/a1/abq,-ta. This generating function can be obtained by starting with the q-Chaundy Theorem <ref> with representation (<ref>) (or (<ref>)) and s=ℓ=-1, u=v=p=0, r=u=1, a={z^±/a}, b={1/ab}, c= d=∅, X=at, Y=bt, and replacing t↦ -t. This completes the proof. One also has the following interesting generating function for q-inverse Al-Salam–Chihara polynomials with arbitrary parameter γ. Let q∈, x=1/2(z+z^-1), z∈, γ, a,b,t∈, |at|<1. Then H_γ(x,t;a,b|q) :=∑_n=0^∞t^n(γ;q)_n q^n2p_n(x;a,b|q^-1)/(q,1/ab;q)_n=(-γ bt;q)_∞/(-bt;q)_∞32γ,z^±/a1/ab,-γ btq,-at. Start with the definition of H_γ in (<ref>) and insert the representation of the q-inverse Al-Salam–Chihara polynomials (<ref>), which then is a double sum over n,k, then reverse the order of summation and shift the n index n↦ n+k. This converts the outer sum to the form of a q-binomial and the result follows. Using the q-Chaundy product representations we can obtain the following double sum representations of the _3ϕ_2 in H_γ(x,t;a,b|q). Let q∈, x=1/2(z+z^-1), z∈, γ, a,b,t∈, |t|<1. Then H_γ(x,t;a,b|q) =∑_n=0^∞(γ;q)_n (-bt)^n/(q;q)_n43q^-n,γ,z^±/a1/ab,-γ bt,q^1-n/γq,qa/γ b =∑_n=0^∞(γ,z^±/a;q)_n (-at)^n/(q,1/ab,-γ bt;q)_n43q^-n,γ,q^1-nab,-q^1-n/γ btq^1-n/γ,q^1-naz^±q,-qbt. Applying the q-Chaundy product representation Theorem <ref> and, in particular, (<ref>) and (<ref>) respectively produces the double sum representations of the _3ϕ_2 in Theorem <ref>. Inserting γ=1/ab in Theorem <ref> produces Theorem <ref>. §.§ The continuous big q-Hermite polynomials The continuous big q-Hermite polynomials have three standard (well-known) generating functions <cit.> which all follow easily using the q-Chaundy Theorem <ref>. Let q∈, x=1/2(z+z^-1), z∈, a,t∈, |t|<1. Then ∑_n=0^∞t^n H_n(x;a|q)/(q;q)_n=(at;q)_∞/(tz^±;q)_∞, ∑_n=0^∞t^n q^n2 H_n(x;a|q)/(q;q)_n=(-ta;q)_∞201az^±-q,-t/a, where |t|<|a|, in the second generating function, so that the nonterminating Gauss basic hypergeometric series is convergent. One can use the q-Chaundy Theorem <ref> with the representations (<ref>)–(<ref>). For instance, the generating function (<ref>) follows with (<ref>), (<ref>), p=v=0, r=u=1, s=ℓ=-1, a={az}, b= c= d=∅, X=tz^-1, Y=tz along with the representation of the continuous big q-Hermite polynomials (<ref>). However, it is easier to just take the limit as b→ 0 in Theorem <ref>. Note that the limit as b→ 0 in both (<ref>), (<ref>) produce (<ref>). This completes the proof. 
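The first of these generating functions is easy to test numerically. The Python sketch below (our addition, not part of the original) rebuilds H_n(x;a|q) from the terminating series representation quoted in the previous subsection, generating successive terms through their ratio so that intermediate quantities stay of moderate size, and compares the series with the infinite products; parameter values and truncation levels are arbitrary.

```python
import cmath

def qpoch(a, q, n):
    p = 1.0 + 0j
    for j in range(n):
        p *= 1.0 - a * q**j
    return p

def qpoch_inf(a, q, terms=300):
    return qpoch(a, q, terms)

def big_q_hermite(n, z, a, q):
    """H_n(x;a|q), x = (z + 1/z)/2, summed from its terminating series;
    consecutive terms are obtained from their ratio for numerical stability."""
    term, total = 1.0 + 0j, 1.0 + 0j
    for k in range(n):
        term *= (1 - q**(n - k)) * (1 - a * z * q**k) / ((1 - q**(k + 1)) * z**2)
        total += term
    return z**n * total

q, a, t, theta = 0.5, 0.4, 0.3, 0.9
z = cmath.exp(1j * theta)                      # so that x = cos(theta)

lhs = sum(t**n * big_q_hermite(n, z, a, q) / qpoch(q, q, n) for n in range(80))
rhs = qpoch_inf(a * t, q) / (qpoch_inf(t * z, q) * qpoch_inf(t / z, q))
assert abs(lhs - rhs) < 1e-9
```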
Another product generating function for continuous q-Hermite polynomials is <cit.> M_γ(x,t;a,b|q):= ∑_n=0^∞t^n (γ;q)_n H_n(x;a|q)/(q;q)_n=(γ tz;q)_∞/(tz;q)_∞21γ,azγ tzq,t/z. One may use the q-Chaundy Theorem <ref> to produce alternative expansions of this generating function which we reproduce in the following theorem. Let q∈, x=1/2(z+z^-1), z∈, γ, a,t∈, |t|<1. Then M_γ(x,t;a|q)=∑_n=0^∞(γ;q)_n (tz)^n/(q;q)_n32q^-n,γ,azγ tz,q^1-n/γq,q/γ z^2, =∑_n=0^∞(γ,az;q)_n (t/z)^n/(q,γ tz;q)_n32q^-n,γ,q^1-n/γ tzq^1-n/γ,q^1-n/azq,qtz^2/a. One can use the q-Chaundy Theorem <ref> with the product generating function (<ref>). However, it is easier to take the limit as b→ 0 in Theorem <ref>. This completes the proof. §.§ The continuous big q-inverse Hermite polynomials From (<ref>), we can obtain the following generating function for continuous big q-inverse Hermite polynomials. Let q∈, x=1/2(z+z^-1), z∈, a,t∈, |t|<1. Then ∑_n=0^∞t^n q^n2H_n(x;a|q^-1)/(q;q)_n =(-tz^±;q)_∞/(-ta;q)_∞. If you replace t↦ t/a and take the limit as a→ 0 in (<ref>) and then replace b↦ a you obtain the following generating function. If one starts with Theorem <ref> and takes the limit b→ 0, one arrives at this result. Also, if one uses the q-Chaundy Theorem <ref> with representations (<ref>) or (<ref>), one arrives at the same generating function. Note that for the continuous big q-Hermite polynomials there exists a generating function with arbitrary numerator dependence given by (γ;q)_n, i.e., (<ref>). We have not, as of yet, been able to derive an analogous generating function for the continuous big q-inverse Hermite polynomials. §.§ The continuous q-Hermite polynomials The standard generating function for the continuous q-Hermite polynomials <cit.> can be easily obtained using the q-Chaundy Theorem <ref>. Let q∈, x=1/2(z+z^-1), z∈, t∈, |t|<1. Then ∑_n=0^∞t^n H_n(x|q)/(q;q)_n=1/(tz^±;q)_∞. The generating function (<ref>) follows easily from the representation (<ref>) using the q-Chaundy Theorem <ref> with r=s=-1, u=v=0, p=ℓ=-1, a= b= c= d=∅, X=zt, Y=tz^-1, and Euler's Theorem <ref> twice. Another generating function for continuous q-Hermite polynomials is given by <cit.> J(x,t|q):=∑_n=0^∞q^n2 t^n H_n(x|q)/(q;q)_n=(-tz;q)_∞01-1--tzq,-t/z. By applying the q-Chaundy Theorem <ref>, we can obtain the following results. Let q∈, x=1/2(z+z^-1), z∈, t∈, |t|<1. Then J(x,t|q)=∑_n=0^∞q^n2(tz)^n/(q;q)_n11q^-n-tzq,q/z^2, =∑_n=0^∞q^n2(t/z)^n/(q,-tz;q)_n201q^-n,-q^1-n/tz-q,-q^ntz^3. Starting with the generating function (<ref>), and applying both expansions of the q-Chaundy Theorem <ref> using r=s=ℓ=-1, u=p=0, v=1, X=-tz, Y=-t/z, d={-tz}, a= b= c=∅ completes the proof. A third generating function for continuous q-Hermite polynomials is given by <cit.> K(x,t|q):=∑_n=0^∞t^n (γ;q)_n H_n(x|q)/(q;q)_n=(γ tz;q)_∞/(tz;q)_∞11-1γγ tzq,t/z. Let q∈, x=1/2(z+z^-1), z∈, t∈, |t|<1. Then K(x,t|q)=∑_n=0^∞(γ;q)_n (tz)^n/(q;q)_n22-1q^-n,γγ tz,q^1-n/γq,q/γ z^2, =∑_n=0^∞(γ;q)_n (t/z)^n/(q,γ tz;q)_n31q^-n,γ,-q^1-n/γ tzq^1-n/γq,q^ntz^3. Starting with the generating function (<ref>), and applying both expansions of the q-Chaundy Theorem <ref> using ℓ=-1, r=s=u=p=0, v=1, X=tz, Y=t/z, a= c={γ}, d={γ tz}, b=∅, completes the proof. One also has the following interesting generating function due to Ismail for continuous q-Hermite polynomials cf. <cit.>. Let q∈, x=1/2(z+z^-1), z∈, t∈, |t|<1. Then O(x,t|q):=∑_n=0^∞t^n q^1/4n^2 H_n(x|q)/(q;q)_n=(-t;q^1/2)_∞21q^1/4z^±-q^1/2q^1/2,-t. One should see the proof of <cit.>. 
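Before turning to the alternate expansions, the identity just stated can be checked numerically. The sketch below (our addition) rebuilds H_n(x|q) from its terminating series and compares the weighted sum with the product form read off from the statement above, namely (-t;q^{1/2})_∞ times the _2ϕ_1 with numerator parameters q^{1/4}z^±, denominator parameter -q^{1/2}, base q^{1/2} and argument -t; parameters and truncation levels are arbitrary.

```python
import cmath

def qpoch(a, q, n):
    p = 1.0 + 0j
    for j in range(n):
        p *= 1.0 - a * q**j
    return p

def qpoch_inf(a, q, terms=400):
    return qpoch(a, q, terms)

def cont_q_hermite(n, z, q):
    """H_n(x|q), x = (z + 1/z)/2, via its terminating series, term by term."""
    term, total = 1.0 + 0j, 1.0 + 0j
    for k in range(n):
        term *= (1 - q**(n - k)) / ((1 - q**(k + 1)) * z**2)
        total += term
    return z**n * total

q, t, theta = 0.64, 0.25, 1.1
p = q**0.5                                     # base q^(1/2) used on the right-hand side
z = cmath.exp(1j * theta)

lhs = sum(t**n * q**(n*n/4) * cont_q_hermite(n, z, q) / qpoch(q, q, n)
          for n in range(60))

a1, a2, b1 = q**0.25 * z, q**0.25 / z, -p      # 2phi1 parameters as read off above
phi = sum(qpoch(a1, p, k) * qpoch(a2, p, k) / (qpoch(p, p, k) * qpoch(b1, p, k))
          * (-t)**k for k in range(200))
rhs = qpoch_inf(-t, p) * phi
assert abs(lhs - rhs) < 1e-8
```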
Using the q-Chaundy Theorem <ref>, one is able to derive alternate expressions for the generating function for continuous q-Hermite polynomials O(x,t|q). Let q∈, x=1/2(z+z^-1), z∈, t∈, |t|<1. Then O(x,t|q)=∑_n=0^∞q^1/2n2 t^n/(q^1/2;q^1/2)_n311q^-1/2n,q^1/4z^±-q^1/2q^1/2,q^1/2, =∑_n=0^∞(q^1/4z^±;q^1/2)_n (-t)^n/(± q^1/2;q^1/2)_n22± q^-1/2nq^1/4-1/2nz^±q^1/2,-q^1/2. Starting with the generating function (<ref>), and replacing q↦ q^2 converts the right-hand side to a form where the q-Chaundy Theorem <ref> can be used. Using r=-1, u=p=ℓ=0, s=v=1, X= Y=-t, c={q^1/2z^±}, d={-q}, a= b=∅, and then replacing q↦ q^1/2 completes the proof. One should observe the surprising fact that the alternate expressions for O(x,t|q) in Theorem <ref>, have the property that the terminating basic hypergeometric series are only a function of x, q and n. Comparing these expressions with the original generating function, it can be seen that these terminating basic hypergeometric series must represent alternative basic hypergeometric representations for the continuous q-Hermite polynomials! Remark <ref> leads us to the following important result. Let q∈, x=1/2(z+z^-1), z∈. Then H_n(x|q)=q^-1/4n (-q^1/2;q^1/2)_n311q^-1/2n,q^1/4z^±-q^1/2q^1/2,q^1/2 =(-1)^nq^-1/4n^2 (q^1/4z^±;q^1/2)_n22± q^-1/2nq^1/4-1/2nz^±q^1/2,-q^1/2. Comparing the the terms of the series of the alternate expressions for the generating function O(x,t|q) in Theorem <ref> completes the proof. This then leads us to the following quadratic transformations for terminating basic hypergeometric series. Let q∈, x=1/2(z+z^-1), z∈. Then, one has the following terminating quadratic transformation: 101q^-2n-q^2,q^2/z^2=(±qz;q)_∞01-q^2/z^2q^2,q^2-2n/z^2 =q^-1/2n^2/z^n(-q;q)_n31q^-n,q^1/2z^±-qq,-q^n =q^-1/2n^2/(-z)^n(q^1/2z^±;q)_n22-1± q^-nq^1/2-nz^±q,q. Comparing (<ref>), (<ref>), (<ref>) and making the replacement q↦ q^2 completes the proof. The above terminating quadratic transformation formula leads to an interesting summation formula. Let n∈_0, q∈. Then, one has the following summation formula 10-1q^-2n-q^2,q^2n∓ 1=q^-n(1/2±1/2)(-q;q)_n. Setting z=q^±1/2 in Theorem <ref> completes the proof. §.§ The continuous q-inverse Hermite polynomials The following result can be found in <cit.>. The result can be found using the q-Chaundy Theorem <ref>, but we provide a slightly different proof. This infinite product generating function was originally found in <cit.>. Let q∈, x=1/2(z+z^-1), z∈, |t|<1. Then ∑_n=0^∞t^nq^n2 H_n(x|q^-1)/(q;q)_n=(-tz^±;q)_∞. First start with the left-hand side of (<ref>) and use the terminating representation of the continuous q-inverse Hermite polynomials (<ref>). Then reversing the order of the summation followed by evaluating the outer sum using Euler's Theorem <ref>, the inner sum can be evaluated using the q-binomial theorem. This completes the proof. If one starts with the representation (<ref>) (or (<ref>)) and utilize the q-Chaundy Theorem <ref>, one arrives at a nonterminating product representation of the corresponding generating function, which happens to be divergent (it is proportional to a _2ϕ_0). One also has the following interesting generating function for continuous q-inverse Hermite polynomials cf. <cit.>. Let q∈, x=1/2(z+z^-1), z∈, |t|<1. Then N(x,t|q):= ∑_n=0^∞t^n q^1/4n^2 H_n(x|q^-1)/(q;q)_n =1/(t;q^1/2)_∞21q^1/4z^±-q^1/2q^1/2,-t. One should see the proof of <cit.>. 
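The q-inverse case can be checked in the same way. The sketch below (our addition, mirroring the previous one) rebuilds H_n(x|q^{-1}) from its terminating series and verifies both the infinite-product generating function stated earlier in this subsection and the product form for N(x,t|q) just quoted; parameters and truncation levels are again arbitrary.

```python
import cmath

def qpoch(a, q, n):
    p = 1.0 + 0j
    for j in range(n):
        p *= 1.0 - a * q**j
    return p

def qpoch_inf(a, q, terms=400):
    return qpoch(a, q, terms)

def cont_q_inv_hermite(n, z, q):
    """H_n(x|q^{-1}), x = (z + 1/z)/2, from its terminating series in base q."""
    term, total = 1.0 + 0j, 1.0 + 0j
    for k in range(n):
        term *= q**(k + 1) * (q**(k - n) - 1) / ((1 - q**(k + 1)) * z**2)
        total += term
    return z**n * total

q, t, theta = 0.64, 0.2, 1.1
p = q**0.5
z = cmath.exp(1j * theta)

# Infinite-product generating function: sum_n t^n q^binom(n,2) H_n(x|q^{-1})/(q;q)_n.
lhs1 = sum(t**n * q**(n*(n - 1)//2) * cont_q_inv_hermite(n, z, q) / qpoch(q, q, n)
           for n in range(40))
assert abs(lhs1 - qpoch_inf(-t * z, q) * qpoch_inf(-t / z, q)) < 1e-8

# N(x,t|q) = sum_n t^n q^(n^2/4) H_n(x|q^{-1})/(q;q)_n versus its product form.
lhs2 = sum(t**n * q**(n*n/4) * cont_q_inv_hermite(n, z, q) / qpoch(q, q, n)
           for n in range(40))
a1, a2, b1 = q**0.25 * z, q**0.25 / z, -p
phi = sum(qpoch(a1, p, k) * qpoch(a2, p, k) / (qpoch(p, p, k) * qpoch(b1, p, k))
          * (-t)**k for k in range(200))
assert abs(lhs2 - phi / qpoch_inf(t, p)) < 1e-8
```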
Using the q-Chaundy Theorem <ref>, one is able to derive alternate expressions for the generating function for continuous q-inverse Hermite polynomials N(x,t|q). Let q∈, x=1/2(z+z^-1), z∈, t∈, |t|<1. Then N(x,t|q)=∑_n=0^∞t^n/(q^1/2;q^1/2)_n31q^-1/2n,q^1/4z^±-q^1/2q^1/2,-q^1/2n =∑_n=0^∞(q^1/4z^±;q^1/2)_n (-t)^n/(± q^1/2;q^1/2)_n22-1± q^-1/2nq^1/4-1/2nz^±q^1/2,q^1/2. Starting with the generating function (<ref>), and replacing q↦ q^2 converts the right-hand side to a form where the q-Chaundy Theorem <ref> can be used. Using r=p=-1, u=ℓ=0, s=v=1, X=t, Y=-t, c={q^1/2z^±}, d={-q}, a= b=∅, and then replacing q↦ q^1/2 completes the proof. Observe surprisingly that the alternate expressions for N(x,t|q) have the property that the terminating basic hypergeometric series are only functions of x, q and n. Comparing these expressions with the original generating function, we realize that in these terminating basic hypergeometric series must represent alternative basic hypergeometric representations for the continuous q-inverse Hermite polynomials! Remark <ref> leads us to the next important result. Let n∈_0, q∈, x=1/2(z+z^-1), z∈. Then H_n(x|q^-1)=q^-1/4n^2(q;q)_n/(q^1/2;q^1/2)_n31q^- 1/2n,q^1/4z^±-q^1/2q^1/2,-q^1/2n =(-1)^nq^-1/4n^2 (q^1/4z^±;q^1/2)_n22-1± q^-1/2nq^1/4-1/2nz^±q^1/2,q^1/2. Comparing the the terms of the series of the alternate expressions for the generating function N(x,t|q) in Theorem <ref> completes the proof. This then leads us to quadratic transformations for terminating basic hypergeometric series. Let n∈_0, q∈, x=1/2(z+z^-1), z∈. Then, one has the following terminating quadratic transformation: 101q^-2n-q^2,q^2/z^2=(±qz;q)_∞01-q^2/z^2q^2,q^2-2n/z^2 =q^-1/2n^2/z^n(-q;q)_n31q^-n,q^1/2z^±-qq,-q^n =q^-1/2n^2/(-z)^n(q^1/2z^±;q)_n 22-1± q^-nq^1/2-nz^±q,q. Comparing (<ref>), (<ref>), (<ref>), (<ref>) and making the replacement q↦ q^2 completes the proof. The above terminating quadratic transformation formula leads to an interesting summation formula. Let n∈_0, q∈. Then, one has the following summation formula: 101q^-2n-q^2,q^2∓1= (± q^1∓1/2;q)_∞01-q^2∓1q^2,q^-2n+2∓1 = q^-1/2n^2∓1/2 n (-q;q)_n. Setting z=q^±1/2 in Theorem <ref> completes the proof. Note that for the continuous q-Hermite polynomials there exists a generating function with arbitrary numerator dependence given by (γ;q)_n, i.e., (<ref>). We have not, as of yet, been able to derive an analogous generating function for the continuous q-inverse Hermite polynomials. ' to 0pt.2ex "16d10AskeyIsmail84 R. Askey and M. E. H. Ismail. Recurrence relations, continued fractions, and orthogonal polynomials. Memoirs of the American Mathematical Society, 49(300):iv+108, 1984. AtakishiyevaAtakishiyev11 M. Atakishiyeva and N. Atakishiyev. A non-standard generating function for continuous dual q-hahn polynomials. Revista de Matemática: Teoría y Applicaciones, 18(1):111–120, 2011. Bailey1928 W. N. Bailey. Products of Generalized Hypergeometric Series. Proceedings of the London Mathematical Society. Second Series, 28(4):242–254, 1928. Chaundy43 T. W. Chaundy. An extension of hypergeometric functions. I. The Quarterly Journal of Mathematics. Oxford Series, 14:55–78, 1943. ChristansenIsmail2006 J. S. Christiansen and M. E. H. Ismail. A moment problem and a family of integral evaluations. Transactions of the American Mathematical Society, 358(9):4071–4097, 2006. Clausen1828 T. Clausen. Über die Fälle, wenn die Reihe von der Form y=1+α/1·β/γ x+α·α+1/1· 2·β·β+1/γ·γ+1x^2 +etc. 
ein Quadrat von der Form z= 1+α'/1·β'/γ'·δ'/ε'x+α' ·α'+1/1· 2·β'·β'+1/γ'·γ'+1·δ' δ'+1/ε' ε'+1x^2 + etc. hat. Journal für die Reine und Angewandte Mathematik, 3:89–91, 1828. CohlCostasSantos20b H. S. Cohl and R. S. Costas-Santos. Symmetry of terminating basic hypergeometric representations of the Askey-Wilson polynomials. Journal of Mathematical Analysis and Applications, 517(1):126583, 2023. CohlIsmail20 H. S. Cohl and M. E. H. Ismail, editors. Lectures on orthogonal polynomials and special functions, volume 464 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge, 2021. Sixth Summer School, Maryland, 2016. ErdelyiHTF A. Erdélyi, W. Magnus, F. Oberhettinger, and F. G. Tricomi. Higher Transcendental Functions. Vols. 1-3. Robert E. Krieger Publishing Co. Inc., Melbourne, Fla., 1981. GaspRah G. Gasper and M. Rahman. Basic hypergeometric series, volume 96 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, second edition, 2004. With a foreword by Richard Askey. Ismail:2009:CQO M. E. H. Ismail. Classical and Quantum Orthogonal Polynomials in One Variable, volume 98 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, 2009. With two chapters by Walter Van Assche, With a foreword by Richard A. Askey, Corrected reprint of the 2005 original. IsmailMasson1994 M. E. H. Ismail and D. R. Masson. q-Hermite polynomials, biorthogonal rational functions, and q-beta integrals. Transactions of the American Mathematical Society, 346(1):63–116, 1994. IsmailZhang2022 M. E. H. Ismail, R. Zhang, and K. Zhou. Orthogonal polynomials of Askey–Wilson type. https://arxiv.org/abs/2205.05280 arXiv:2205.05280, 2022. Ismailetal2022 M. E. H. Ismail, R. Zhang, and K. Zhou. q-fractional Askey-Wilson integrals and related semigroups of operators. Physica D. Nonlinear Phenomena, 442:Paper No. 133534, 15, 2022. Koekoeketal R. Koekoek, P. A. Lesky, and R. F. Swarttouw. Hypergeometric orthogonal polynomials and their q-analogues. Springer Monographs in Mathematics. Springer-Verlag, Berlin, 2010. With a foreword by Tom H. Koornwinder. vandeBultRains09 F. J. van de Bult and E. M. Rains. Basic hypergeometric functions as limits of elliptic hypergeometric functions. Symmetry, Integrability and Geometry: Methods and Applications, 5(059), 2009.
http://arxiv.org/abs/2307.05776v1
20230711200703
Probabilistic Unitary Formulation of Open Quantum System Dynamics
[ "Le Hu", "Andrew N. Jordan" ]
quant-ph
[ "quant-ph" ]
[email protected] Department of Physics and Astronomy, University of Rochester, Rochester, New York 14627, USA Institute for Quantum Studies, Chapman University, 1 University Drive, Orange, CA 92866, USA Institute for Quantum Studies, Chapman University, 1 University Drive, Orange, CA 92866, USA Department of Physics and Astronomy, University of Rochester, Rochester, New York 14627, USA We show explicitly that for any continuously evolving open quantum system, be it finite (d-dimensional) or countably infinite dimensional, its dynamics can be described by a time-dependent Hamiltonian and probabilistic combinations of up to d-1 (d →∞ for infinite dimensional case), instead of d^2-1, time-dependent unitary operators, resulting in a quadratic improvement in simulation resources. Importantly, both types of operations must be initial state-dependent in general, and thus the simulation is tailored to that initial state. Such description is exact under all cases, and does not rely on any assumptions other than the continuity and differentiability of the density matrix. It turns out that upon generalizations, the formalism can also be used to describe general quantum channels, which may not be complete positive or even positive, and results in a Kraus-like representation. Experimentally, the formalism provides a scheme to control a quantum state to evolve along designed quantum trajectories, and can be particularly useful in quantum computing and quantum simulation scenes since only unitary resources are needed for implementation. Philosophically, it provides us with a new perspective to understand the dynamics of open quantum systems and related problems such as decoherence and quantum measurement, i.e. the non-unitary evolution of quantum states can thereby be regarded as the combined effect of state-dependent deterministic evolutions and probabilistic applications of unitary operators. Probabilistic Unitary Formulation of Open Quantum System Dynamics and Andrew N. Jordan August 12, 2023 ================================================================== Introduction.—While the evolution of a closed quantum system can be well-described by the Schrödinger equation, realistic quantum systems are almost always affected by its surroundings. The dynamics of such open quantum system is generally non-unitary and hence requires a different description. Of the numerous descriptions <cit.> developed, the best-known one is the master equation in standard (Lindblad) form <cit.>, ρ̇=-i[H, ρ]+∑_i=1^d^2-1γ_i(L_iρ L_i^†-1/2{L_i^† L_i, ρ}). The derivation and application of the Lindblad master equation often relies on assumptions such as weak coupling limit or semigroup property <cit.>, hence is restricted in the Markovian regime. Many efforts have been made to extend the description in the non-Markovian regime, including Nakajima-Zwanzig equation <cit.>, time-convolutionless projection-operator technique <cit.>, correlated projection superoperator technique <cit.>, collisional models <cit.>, stochastic Schrödinger equation method <cit.>, rate operator quantum jump technique <cit.> and many more <cit.>. Nevertheless, establishing a simple, intuitive and unified framework to describe the dynamics of open quantum system in all regimes is still a long-standing open problem. 
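As a concrete reference point for what follows, the sketch below (our addition, not part of the original letter) integrates the standard-form master equation above for a qubit with a single decay channel using an explicit Euler step, and checks the expected exponential decay of the excited-state population; the rate, step size, basis ordering and tolerances are arbitrary choices for this illustration.

```python
import numpy as np

# Basis ordering (|e>, |g>); single jump operator L = |g><e| with rate gamma.
gamma, dt, steps = 1.0, 1e-4, 20000
L = np.array([[0, 0], [1, 0]], dtype=complex)
H = np.zeros((2, 2), dtype=complex)              # Hamiltonian part omitted for brevity
rho = np.array([[1, 0], [0, 0]], dtype=complex)  # start in the excited state

for _ in range(steps):
    lind = L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    rho = rho + dt * (-1j * (H @ rho - rho @ H) + gamma * lind)

T = steps * dt
assert abs(np.trace(rho) - 1) < 1e-9                      # trace is preserved
assert abs(rho[0, 0].real - np.exp(-gamma * T)) < 1e-3    # rho_ee(t) = exp(-gamma t)
```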
In this Letter, we are going to show, by deriving merely from an ansatz, that the dynamics of any finite, d-dimensional (later generalized to infinite dimensional case), continuously evolving open quantum system ρ(t) can be precisely described by a time-dependent Hamiltonian and probabilistic combinations of d-1, instead of d^2-1 (as in Eq. (<ref>)), time-dependent unitary operators, ρ̇(t)=-i[H(t),ρ(t)]+∑_i=0^d-1q_i(t)(𝒰̃_i,tρ(t)𝒰̃^†_i,t-ρ(t)), where H(t) is the Hamiltonian that can drive all instantaneous eigenvectors {|ψ_i(t)⟩} of the density matrix ρ(t) of the system, i.e. H(t)|ψ_i(t)⟩=i|_tψ_i(t)⟩ i=1,2, … ,d, 𝒰̃_i,t is the ith traceless (except for i=0), linearly-independent (i.e. Tr[𝒰̃_i,t^†𝒰̃_j,t]=δ_ijd) and time-dependent unitary operators defined by 𝒰̃_i,t≡𝕌_t 𝒰_i𝕌^†_t, where 𝕌_t is the unitary operator that diagonalizes the density matrix ρ(t), ρ(t)=𝕌_tρ_D(t)𝕌^†_t, and 𝒰_i∈{U_n m∈ℝ^d × d| n, m ∈{0,1, …, d-1}} is the ith real Weyl operator <cit.> (specially, 𝒰_0=𝒰̃_0,t=1), U_nm=∑_k=0^d-1e^2 πi/d k n|k⟩⟨(k+m) d|. The q_i(t), which is solvable from a system of equations, denotes the rate so that q_i(t)dt is the probability that 𝒰̃_i,t is applied to the quantum state during the time interval [t, t+dt]. Since q_i(t)dt has a clear physical meaning, one can control a quantum state to evolve along a designed quantum trajectory ρ(t), by setting a short time interval dt and applying a time-dependent Hamiltonian H(t) and unitary operators 𝒰̃_i,t with classical probability q_i(t)dt during the time interval [t,t+dt]. For instance, such applications of unitary operators can be randomly determined by a classical computer such that for ignorant observers, the quantum states appear to evolve non-unitarily to them based on their measurements. It has to be noted that we can go beyound the description given above by letting q_i(t) be negative or even singular for finite dimensional systems, the significance of which we shall discuss in detail. Since experimentally one cannot make an event happen with negative or singular probability, though such phenomena occur naturally as we shall see, such quantum control scheme will only work in the cases where q_i(t) ≥ 0 for all i, which corresponds to the evolution with contracting trace distance <cit.> and non-increasing purity [Supplemental Materials I]. In other word, while we can numerically simulate non-Markovian cases where q_i(t)<0, they cannot be physically implemented by the above means. It is worth noting that since q_i(t)dt represents probability, the direction of information flow becomes crystal-clear, as there is generally no information back-flow from the environment to the system over all channels 𝒰̃_i,tρ(t)𝒰̃^†_i,t if and only if q_i(t) ≥ 0 for all i. Probabilistic unitary formulation.—Below we will derive in detail the formulation for the finite dimensional case, which is completely based on the ansatz that the continuous dynamics of any finite dimensional open quantum system can be regarded as the combined effects of probabilistic combination of unitary operators, plus a time-dependent Hamiltonian. Mathematically, the ansatz can be formulated as ρ(t+dt) =(1-∑_i=1^mq_i(t)dt)U_t(dt)ρ(t)U^†_t(dt) +∑_i=1^m(q_i(t)dt 𝒰̃_i,tρ(t)𝒰̃^†_i,t), where U_t(dt) and 𝒰̃_i,t are some unitary operators yet to be determined. 
The physical meaning of the equation is that during the time interval [t,t+dt], there is probability q_i(t)dt that the quantum state will be subject to the unitary transformation 𝒰̃_i,t, and probability (1-∑_i=1^mq_i(t)dt) that the quantum state will evolve unitarily by U_t(dt). By Taylor expanding U_t(dt)=(1-i H(t)dt+𝒪(dt^2)), and calculating ρ̇(t) according to its limit definition ρ̇(t)=lim_dt→0 (ρ(t+dt)-ρ(t))/dt, one can straightforwardly recover Eq. (<ref>), except that the minimum value of m, and the explicit expression of H(t), 𝒰̃_i,t and q_i(t) are undetermined. To determine their explicit expressions, let us first consider the simpler case that the density matrices at time t and t+dt happen to be simultaneously diagonalized, denoted as ρ_D(t) and ρ_D(t+dt), and that H(t)=0. In this simplified scenario, only the eigenvalues of the density matrix changes over time, hence it becomes much easier to track their changes. Let us denote the eigenvalues of ρ_D(t) by p_1(t), …,p_d(t), and the eigenvalues of ρ_D(t+dt) by p_1(t)+f_1(t)dt, …, p_d(t)+f_d(t)dt; we need to find a set of {𝒰_i,t} to describe the changes of eigenvalues via Eq. (<ref>). It turns out that {𝒰_i} can be chosen to be the real Weyl operators (Eq. (<ref>)), where the i denotes the number of 1's in the lower left conner of 𝒰_i. The effect of those real Weyl operators is that they “rotate” the eigenvalues cyclically, such that diag(𝒰_iρ_D(t)𝒰^†_i)=(p_1+i, p_2+i, …, p_d+i), where the + here is defined as modular addition (i.e. i+j → (i+j) d). There are d-1 non-identity real Weyl operators, and since we have assumed H(t)=0, we have U_t(dt)=1, resulting in a total number of d operators, a minimum sufficient number to make the eigenvalues cycle back. Plugging in the expression of 𝒰_i, U_t(dt), ρ_D(t) and ρ_D(t+dt) into Eq. (<ref>), one obtains a system of equations, which can be written in the matrix form, [ p_1 p_d p_d-1 ⋯ p_2; p_2 p_1 p_d ⋯ p_3; ⋮ ⋯ ⋮; p_d p_d-1 p_d-2 ⋯ p_1 ]_P[ -q_0; q_1; ⋮; q_d-1 ]_q⃗=[ f_1; f_2; ⋮; f_d ]_f⃗ where we have defined q_0(t)=∑_i=1^d-1q_i(t). The matrix P, which is a doubly stochastic circulant matrix and thus can be easily diagonalized by the discrete Fourier transform, is almost always non-singular so that we have the unique solution for q⃗ for a given f⃗. Moreover, it can be shown by straightforward algebra that ∑_i=1^d f_i=0 if and only if -q_0+∑_i=1^d-1 q_i=0, indicating that the solution q⃗ conserves probability if and only if f⃗ conserves the trace. To address the possible singularity of P, we have the following theorem: Theorem <cit.>. If {v_j}_0≤ j ≤n-1 is a nonincreasing or nondecreasing sequence of nonnegative or nonpositive real numbers, then the circulant matrix V = circ(v_0, v_1, ⋯, v_n-1) is singular if and only if for some integer d| n, d ≥ 2, the vector v=(v_0,v_1,...,v_n-1) consists of nd consecutive constant blocks of length d. Since the eigenvalues of the density matrix is always nonnegative, and since we can always rearrange the eigenvalues in a nonincreasing or nondecreasing manner by a unitary transformation, the above theorem is applicable in our case, meaning that P is singular if and only if the eigenvalues of the density matrix can be arranged into consecutive constant blocks of the same length (e.g. diag(ρ_D)=(a,a,b,b,c,c) where a≥ b≥ c). When such singular P indeed occurs, one can calculate P^-1 analytically in the regime where P is invertible, and assume P is invertible everywhere to obtain the solution of q⃗. 
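The linear system above is straightforward to set up and solve numerically. The sketch below (our addition) does so for a d=3 diagonal trajectory, checks that the solved rates conserve probability whenever f⃗ conserves the trace, and confirms that the rates reproduce ρ̇_D through the cyclic-shift unitaries; the example trajectory, the helper names and the particular labelling of the shifts (which fixes the convention relating 𝒰_i to the columns of P) are choices made for this illustration.

```python
import numpy as np

def rates(p, f):
    """Solve P (-q0, q1, ..., q_{d-1})^T = f for the circulant P built from the
    eigenvalues p, as in the displayed system; returns (q0, q1..q_{d-1})."""
    d = len(p)
    P = np.array([[p[(i - j) % d] for j in range(d)] for i in range(d)])
    x = np.linalg.solve(P, f)
    return -x[0], x[1:]

def shift(d, m):
    """Cyclic-shift unitary labelled so that the k-th diagonal entry of
    U_m rho_D U_m^dagger is p_{(k-m) mod d}, consistent with rates() above."""
    U = np.zeros((d, d))
    for k in range(d):
        U[k, (k - m) % d] = 1.0
    return U

# A d = 3 diagonal trajectory (any smooth choice with fixed unit trace will do).
def eigenvalues(t):
    return np.array([0.5 + 0.2*np.cos(t), 0.3, 0.2 - 0.2*np.cos(t)])

t, dt = 0.7, 1e-6
p = eigenvalues(t)
f = (eigenvalues(t + dt) - eigenvalues(t - dt)) / (2 * dt)   # time derivatives

q0, q = rates(p, f)
assert abs(q0 - q.sum()) < 1e-6          # probability conserved, since sum(f) = 0

# The rates reproduce rho_D' = -q0 rho_D + sum_m q_m U_m rho_D U_m^dagger.
rho_D = np.diag(p)
rhs = -q0 * rho_D + sum(q[m - 1] * shift(3, m) @ rho_D @ shift(3, m).T
                        for m in range(1, 3))
assert np.allclose(np.diag(rhs), f, atol=1e-6)
```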
Such solution of q⃗ will be singular where P^-1 is singular, yet we stress that one can still use the solution of q⃗ to describe the dynamics. Such rate q⃗ with singular point(s) occurs naturally, for example, when a qubit goes through its maximally mixed state, as we shall see in Example I. It is worth noting that while such singularity does not hamper the description of the dynamics, it will make the quantum control scheme break down around the singular points of q⃗, as q_i(t)dt for some i will blow up and become unphysical for a preset time interval dt. We have shown how Eq. (<ref>) can be used to describe the open quantum system dynamics when the density matrix happens to be diagonalized and H(t)=0. One can then use Eq. (<ref>) and the limit definition ρ̇_D(t)=lim_dt → 0 (ρ_D(t+dt)-ρ_D(t))/dt to obtain the master equation for the diagonalized density matrix, ρ̇_D(t)=-q_0(t)ρ_D(t)+∑_i=1^d-1q_i(t) 𝒰_i ρ_D(t)𝒰^†_i, where the rate q_i(t) is solvable from Eq. (<ref>) and 𝒰_i is the ith real Weyl operators. To generalize the above description to the non-diagonal cases, one only needs to establish explicitly the relation between ρ_D(t), ρ̇_D(t) and ρ(t), ρ̇(t), respectively, and plug in the expression of ρ_D(t) in terms of ρ(t), and ρ̇_D(t) in terms of ρ̇(t), into the above equation. To do so, assume that ρ(t) can be diagonalized by 𝕌_t, i.e. ρ(t)=𝕌_t ρ_D(t)𝕌^†_t, and that ρ(t+dt) can be diagonalized by 𝕌_t+dt. Since ρ(t) is assumed to evolve continuously, 𝕌_t+dt should only differ from 𝕌_t by an infinitesimal amount. In other words, there should exist a unitary operator, denoted as U_t(dt), which can connect 𝕌_t and 𝕌_t+dt, such that 𝕌_t+dt=U_t(dt)𝕌_t. Since the column vectors of 𝕌_t and 𝕌_t+dt are composed by the instantaneous eigenvectors |ψ_i(t)⟩ and |ψ_i(t+dt)⟩ of ρ(t) and ρ(t+dt), respectively, the requirement of 𝕌_t+dt=U_t(dt)𝕌_t is equivalent to the requirement of U_t(dt)|ψ_i(t)⟩=|ψ_i(t+dt)⟩, for each i=1,2,…,d. This is just the Schrödinger equation should we expand U_t(dt)=(1-iH(t)dt+𝒪(dt^2)) and also |ψ_i(t+dt)⟩ by Taylor series, and the Hamiltonian H(t) has to be such that it solves |ψ_i(t)⟩ for all i. A scheme to find the Hamiltonian has been solved in our recent work <cit.>, where one can construct the Hamiltonian by H(t)=i ∑_i=1^d∂_t ψ̃_i(t)ψ̃_i(t), where ψ̃_i(t)≡ e^iϕ_i(t)|ψ_i(t)⟩ and ϕ_i(t)≡∫-i⟨∂_tψ_i(t)|ψ_i(t)⟩ d t. The Hamiltonian constructed in this way is optimal in the sense that it has the minimum Hilbert-Schmidt norm H_HS=Tr[H^2(t)] [Supplemental Materials II], making its experimental implementation easier. One may also simply let H(t)=i ∑_i ∂_t ψ_i(t)ψ_i(t) if such optimization is unneeded. To establish the relation between ρ̇(t) and ρ̇_D(t), first write ρ̇(t) in its limit definition ρ̇(t)=lim_dt → 0(ρ(t+dt)-ρ(t))/dt, then plug in ρ(t)=𝕌_t ρ_D(t)𝕌^†_t and ρ(t+dt)=𝕌_t+dtρ_D(t+dt)𝕌^†_t+dt, and finally make use of 𝕌_t+dt=U_t(dt)𝕌_t, U_t(dt)=(1-iH(t)dt+𝒪(dt^2)) and ρ_D(t+dt)=ρ_D(t)+ρ̇_D(t)dt+𝒪(dt^2), from which one obtains ρ̇(t)=-i[H(t), ρ(t)] +𝕌_t ρ̇_D(t) 𝕌_t^†, or equivalently, ρ̇_D(t)=𝕌_t^†(ρ̇(t)+i[H(t), ρ(t)]) 𝕌_t. By plugging in the above equation of ρ̇_D(t) and ρ_D(t)=𝕌_t^†ρ(t) 𝕌_t into Eq. (<ref>), one can readily recover the main result of the paper, Eq. (<ref>). Since the only assumption made in the above derivations is the continuity of ρ(t), Eq. (<ref>) is exact under all cases and thus very general. Moreover, from Eq. 
(<ref>), one can immediately obtain the semigroup master equation, by taking the generator ℒ_t, defined by ρ̇(t)=ℒ_t ρ(t), to be time-independent. A simple choice then is letting H(t), 𝒰̃_i,t and q_i(t) all to be time-independent, and the complete positivity further requires q_i≥0 for all i by the Gorini-Kossakowski-Sudarshan-Lindblad theorem <cit.>. Note that the time independency of H, 𝒰̃_i and q_i is only a sufficient requirement, not a necessary one. As we shall see in Example II, contrary to the usual belief, a semigroup master equation can also contain time-dependent parameters. Discussions.—We highlight that the formalism developed above can in fact be used to describe the master equation of any Hermitian matrix ℋ(t) via Eq. (<ref>). In this general case, since f⃗ does not have to conserve the trace, the solution q⃗ does not have to conserve the probability. On the other hand, one can also use the formalism to describe the general quantum channel that may not be complete positive or even not positive, which could occur naturally when the initial state is correlated with the environment <cit.>. If we denote the dynamical map by Φ_t(s), then any quantum channel ρ(t+s)=Φ_t(s)[ρ(t)] can be described by [Supplemental Materials III] ρ(t+s) =∑_i=0^d-1 q_i(t,s) 𝒰̃_i;t,sρ(t) 𝒰̃^†_i;t,s, where 𝒰̃_i;t,s =U_t(s)𝕌_t𝒰_i 𝕌^†_t. The U_t(s) is defined by 𝕌_t+s=U_t(s)𝕌_t, where 𝕌_t+s and 𝕌_t is the unitary matrix diagonalizing ρ(t+s) and ρ(t), respectively. The q_i(t,s), which can be solved by a similar manner from Eq. (<ref>) (specially, q_0(t,s)≡ 1-∑_i=1^d-1q_i(t,s)), denotes the (possibly negative or singular) probability the 𝒰̃_i;t,s is applied to ρ(t). If ∑_i q_i(t,s)=1 and q_i(t,s) ∈ [0,1] for all i, then the quantum channel Φ_t(s) is called mixed unitary or random unitary <cit.>. It is interesting to note that Lee and Watrous <cit.> proved that detecting whether a quantum channel is mixed unitary is NP-hard, yet we just show that any quantum channel can be written in the mixed unitary form if one relaxes the restrictions on q_i(t,s) by allowing negative and singular probability. Moreover, one can verify that ∑_i=0^d-1(√(q_i(t,s))𝒰̃_i;t,s)_𝒦_i(√(q_i(t,s))𝒰̃^†_i;t,s)_𝒦̅_i=1, and rewrite Eq. (<ref>) by ρ(t+s)=∑_i=0^d-1𝒦_i ρ(t)𝒦̅_i, which has a very similar form as the Kraus representation ∑_i K_i ρ(t)K_i^† with the constraint ∑_i K_i K_i^†=1, except that in the above case, 𝒦̅_i=𝒦_i^† (if q_i(t,s) ≥ 0) or 𝒦̅_i=-𝒦_i^† (if q_i(t,s) ≤ 0). Different from the Kraus representation, which works only for the trace-preserving and complete positive map, Eq. (<ref>) under the constraint Eq. (<ref>) is valid for any trace-preserving map. The whole formalism above can be easily generalized to the countably infinite dimensional case, which delivers the master equation ρ̇(t)=-i[H(t),ρ(t)]+∑_i=0^∞q_i(t)(𝒰̃_i,tρ(t)𝒰̃^†_i,t-ρ(t)), where H(t) and 𝒰̃_i,t are defined the same way as in Eq. (<ref>)-(<ref>), and 𝒰_i is instead defined as 𝒰_i=∑_m=0^∞|m+i⟩⟨m|, i∈ℕ, which satisfies the properties (𝒰^†_i𝒰_j)=(𝒰̃^†_i,t𝒰̃_j,t)=∑_k=1^∞δ_ij. The detailed derivation, shown in the Supplemental Materials IV, shows that for infinite dimensional case, q⃗ is always nonsingular as long as f⃗ is nonsingular. This is because the P matrix, which would be defined by P_ij=p_i-j+1, where p_0=p_-1=p_-2=…=0 and p_1, p_2, …, are the eigenvalues of ρ(t), turns out to be an infinite dimensional lower triangular Toeplitz matrix and can always be made invertible by choosing p_1≠0. 
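The channel representation above is equally easy to realize numerically. The sketch below (our addition) takes two arbitrary density matrices, diagonalizes both, solves the corresponding circulant system for the (possibly negative) weights q_i(t,s), builds the unitaries 𝒰̃_{i;t,s}=U_t(s)𝕌_t𝒰_i𝕌_t^†, and verifies that they reproduce ρ(t+s) exactly; the dimension, random seed and shift labelling are choices made for this illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
d = 3

def random_state(dim):
    A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

def shift(dim, m):
    U = np.zeros((dim, dim))
    for k in range(dim):
        U[k, (k - m) % dim] = 1.0
    return U

rho1, rho2 = random_state(d), random_state(d)

vals1, V1 = np.linalg.eigh(rho1)         # rho1 = V1 diag(vals1) V1^dagger
vals2, V2 = np.linalg.eigh(rho2)

# Solve sum_i q_i * vals1[(k - i) mod d] = vals2[k] for the weights q_i.
M = np.array([[vals1[(k - i) % d] for i in range(d)] for k in range(d)])
q = np.linalg.solve(M, vals2)
assert abs(q.sum() - 1) < 1e-10          # the weights always sum to one

# Build the state-dependent unitaries and reconstruct rho2 from rho1.
U_ts = V2 @ V1.conj().T                  # maps eigenvectors of rho1 to those of rho2
unitaries = [U_ts @ V1 @ shift(d, i) @ V1.conj().T for i in range(d)]
recon = sum(q[i] * unitaries[i] @ rho1 @ unitaries[i].conj().T for i in range(d))
assert np.allclose(recon, rho2, atol=1e-10)

print("weights q_i(t,s):", np.round(q, 4))   # some q_i may be negative
```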
Below we show a few examples, followed by comments, to demonstrate our formalism. Example I - Jaynes-Cummings model.—Consider the Jaynes-Cummings model <cit.> under the rotating-wave approximation, H_SE=ħω_c a^† a+ħω_aσ_z/2+ħΩ/2(a σ_++a^†σ_-), where σ_±=σ_x ± iσ_y. The model describes a two-level atom of resonant frequency ω_a interacting with a single-mode field in a cavity with field frequency ω_c and interaction strength Ω. Assuming that initially the cavity is in the vacuum state |0⟩ and the atom is in the excited state |e⟩ with ω_a=ω_c, the state of the total system at a later time is given by <cit.> |ψ(t)⟩=cos(Ω t/2)|e, 0⟩-i sin(Ω t/2)|g, 1⟩. We can then write the above solution in the density matrix form ρ(t)=|ψ(t)⟩⟨ψ(t)| and take the partial trace over the cavity to obtain the density matrix of the atom ρ_S(t), ρ_S(t)=Tr_E(ρ)=([ cos ^2(Ω t/2) 0; 0 sin ^2(Ω t/2) ]), which describes the evolution of the atom alone, ignoring the state of the photon. We would like to write down a master equation for ρ_S(t) using the formulation we developed, which can be obtained by direct application of Eq. (<ref>) since ρ_S(t) is of the diagonal form, ρ̇_S(t)=-i[H_S, ρ_S(t)]+q_1(t)(σ_x ρ_S(t) σ_x-ρ_S(t)), where q_1(t)=(Ω/2)tan(Ω t). There are a few comments worth making about this solution. First, in this specific example, the Hamiltonian H_S can be taken to be anything as long as [H_S, ρ_S(t)]=0, which is also evident from Eq. (<ref>). Second, the rate q_1(t) can be negative, indicating information backflow from the cavity. Moreover, since q_1(t) denotes the rate of an event happening, it is not unphysical at all if q_1(t) →∞ as t →π/(2Ω). To confirm this, we can calculate ρ̇_S(t) at time t=π/(2Ω) via Eq. (<ref>) to obtain ρ̇_S(t)=diag(-Ω/2, Ω/2), which is exactly what we would obtain via Eq. (<ref>). Interestingly, this is not the case for the mixed unitary channel, which is unital, i.e. Φ(1)=1, meaning that a maximally mixed state will always remain maximally mixed after the application of a unital channel, hence ρ̇_S=0. Yet we see clearly that the ρ_S(t) above escapes from being maximally mixed at the time t=π/(2Ω) and ρ̇_S(π/(2Ω))≠0. This phenomenon, which is counterintuitive and breaks the property of a unital channel, is related to the singular probability. In the above example, even though the two matrices, -q_1(t)ρ_S(t) and q_1(t)σ_xρ_S(t)σ_x, each diverge at t=π/(2Ω), their sum is nevertheless finite because the infinite terms cancel with each other, somewhat resembling renormalization in quantum field theory. It is worth noting that such a singular rate, which has been previously reported in the literature <cit.>, contains rich physics and is related to entanglement sudden death <cit.>. By Eq. (<ref>), the singular rate occurs whenever the matrix P is non-invertible, which depends solely on the eigenvalues of the density matrix. Example II - Decay of a two-level atom.—Consider the well-known model in which a two-level atom spontaneously decays due to its interaction with the vacuum, a process described by ρ̇(t)=-i[H, ρ(t)]+Γ(σ^-ρ(t) σ^+-1/2{σ^+σ^-, ρ(t)}), where Γ denotes the coupling strength between the atom and the vacuum and H=ħωσ_z/2. For simplicity, let us assume that the atom is initially in the excited state. Then, by straightforward calculation, it can be shown that the dynamics can be equivalently described by ρ̇(t)=-i[H, ρ(t)]+Γρ_11(t)/(ρ_11(t)-ρ_22(t))(σ_x ρ(t) σ_x-ρ(t)). Notice how Eq. 
(<ref>), a semigroup master equation, can actually contain time-dependent (and even negative) parameters when it is cast into the probabilistic unitary form Eq. (<ref>). Unlike usual master equations such as Eq. (<ref>), the probabilistic unitary master equation is generally state-dependent, meaning that if we have a different initial state, then Eq. (<ref>) would take a different form. Hence one must write down different probabilistic unitary master equations for different quantum processes, even though such processes can be described by a single master equation in the conventional approach. Conclusion.—In this Letter, we have derived master equations which can exactly describe open quantum system dynamics across all regimes. These equations, i.e. Eq. (<ref>), (<ref>) and (<ref>), combined with the Schrödinger equation, suggest that all non-relativistic quantum processes are either unitary or probabilistic unitary, including both the continuous and discontinuous (i.e. quantum jump) dynamics in an open or closed quantum system with a finite or infinite Hilbert space. These results therefore unify quantum mechanical dynamics by providing a unitary-operator description in the non-unitary regime, enabling us to understand non-unitary quantum processes, such as decoherence and wave function collapse, from a new and more coherent perspective. Practically, they also suggest a scheme to control a quantum state to evolve along designed trajectories, which could be particularly useful in quantum simulation settings, as only unitary resources are needed. Acknowledgement.—We thank Luiz Davidovich for inspiring discussions. We are also grateful for the support from the Army Research Office (ARO) under the grant W911NF-22-1-0258. Supplemental Materials I Below, we will show that if q_i(t) ≥ 0 for all i, then Eq. (<ref>) describes dynamics with non-increasing purity. Since d/dt Tr[ρ^2(t)]=2Tr(ρ̇(t)ρ(t)), by plugging ρ̇(t) defined by Eq. (<ref>) into 2Tr(ρ̇(t)ρ(t)), one obtains d/dt Tr[ρ^2(t)] =2 Tr[( -i[H(t),ρ(t)]+∑_i=0^d-1q_i(t)(𝒰̃_i,tρ(t)𝒰̃^†_i,t-ρ(t)))ρ(t)] =-2Tr(i[H(t),ρ(t)]ρ(t))+2∑_i=0^d-1q_i(t)Tr[ρ_i^'(t)ρ(t)]-2∑_i=0^d-1q_i(t)Tr[ρ^2(t)] =2∑_i=0^d-1q_i(t)(Tr[ρ_i^'(t)ρ(t)]-Tr[ρ^2(t)]), where ρ_i^'(t)≡𝒰̃_i,tρ(t)𝒰̃^†_i,t and the term -2Tr(i[H(t),ρ(t)]ρ(t))=0 by the cyclic permutation of the trace. If one decomposes the density matrix into its Bloch vector form ρ(t)=(1/d)I+v⃗_ρ·Λ⃗, where Λ⃗ is formed by the SU(n) basis (e.g. generalized Gell-Mann matrices) <cit.>, then Tr[ρ_i^'(t)ρ(t)]=v⃗_ρ^'·v⃗_ρ = |v⃗_ρ^'||v⃗_ρ|cosθ≤ |v⃗_ρ||v⃗_ρ|=v⃗_ρ·v⃗_ρ=Tr[ρ^2(t)], where θ is the angle between v⃗_ρ and v⃗_ρ^'. Note that |v⃗_ρ|=|v⃗_ρ^'| since a unitary transformation does not change the length of the Bloch vector <cit.>. If q_i(t)≥0 for all i, one then readily concludes that d/dt Tr[ρ^2(t)] ≤ 0. Supplemental Materials II To see that the Hamiltonian H(t)=i ∑_i |∂_tψ̃_i(t) ⟩⟨ψ̃_i(t) | defined by Eq. (<ref>) indeed has the minimum Hilbert-Schmidt norm, we need to calculate the instantaneous energy variance [Δ H(t)]_ρ_i(t)^2=Tr[ρ_i(t) H^2(t)]-(Tr[ρ_i(t) H(t)])^2, where ρ_i(t)=|ψ̃_i(t)⟩⟨ψ̃_i(t)|, assuming that ρ_i(t) evolves unitarily. 
By making use of the fact that ⟨ψ̃_i(t)|∂_t ψ̃_i(t)⟩=0 <cit.>, we obtain [Δ H(t)]^2 =Tr [ρ_i(t) H^2(t) ]- (Tr [ρ_i(t) H(t) ] )^2 = ⟨∂_tψ̃_i(t)|∂_tψ̃_i(t)⟩-(⟨ψ̃_i(t) | ∂_tψ̃_i(t) ⟩)^2 = ⟨∂_tψ̃_i(t)|∂_tψ̃_i(t)⟩, where the middle term vanishes since ⟨ψ̃_i(t)|∂_tψ̃_i(t)⟩=0 for all i. Since the instantaneous energy variance [Δ H(t)]_ρ_i(t)^2 for a given trajectory ρ_i(t) is a fixed value independent of the explicit form of H(t) (provided that H(t) solves ρ_i(t)) <cit.>, a minimized (Tr[ρ_i(t) H(t)])^2=0 implies a minimized Tr[ρ_i(t) H^2(t)]. By Tr[H^2(t)]=Tr [∑_i ρ_i (t) H^2(t)]=∑_iTr [ ρ_i (t) H^2(t)], we conclude that for a given ρ(t), if H(t) solves every eigenstate of ρ(t) and Tr [ ρ_i (t) H^2(t) ] is minimized for all i, then Tr[H^2(t)] is also minimized. Supplemental Materials III In the following, we will derive Eq. (<ref>) in the main text. Denote the eigenvalues of ρ_D(t) by p_1(t), ⋯, p_d(t) and the eigenvalues of ρ_D(t+s) by p_1(t)+f_1(t,s), ⋯, p_d(t)+f_d(t,s), where f_i(t,s) denotes the change of p_i(t) during the time interval [t,t+s]. By the same method described in the main text, one can obtain a system of equations written in the matrix form [ p_1 p_d p_d-1 ⋯ p_2; p_2 p_1 p_d ⋯ p_3; ⋮ ⋯ ⋮; p_d p_d-1 p_d-2 ⋯ p_1 ]_P[ q_0-1; q_1; ⋮; q_d-1 ]_q⃗=[ f_1; f_2; ⋮; f_d ]_f⃗, and the expression for the quantum channel for the diagonalized density matrix ρ_D(t+s)=∑_i=0^d-1 q_i(t,s) 𝒰_i ρ_D(t)𝒰_i^†. Note that here q_0(t,s) ≡ 1-∑_i=1^d-1 q_i(t,s), different from the definition of q_0 in the continuous case, as this definition puts the final result in a nicer form. By plugging in ρ_D(t)=𝕌^†_t ρ(t) 𝕌_t, ρ_D(t+s)=𝕌^†_t+sρ(t+s) 𝕌_t+s and 𝕌_t+s=U_t(s)𝕌_t, one obtains 𝕌^†_t U^†_t(s)ρ(t+s)U_t(s)𝕌_t=∑_i=0^d-1 q_i(t,s) 𝒰_i𝕌^†_t ρ(t) 𝕌_t𝒰^†_i ⇒ρ(t+s)=∑_i=0^d-1q_i(t,s) (U_t(s)𝕌_t𝒰_i 𝕌_t^†)ρ(t)(𝕌_t 𝒰^†_i𝕌^†_t U^†_t(s))=∑_i=0^d-1q_i(t,s) 𝒰̃_i;t,sρ(t)𝒰̃^†_i;t,s, which recovers Eq. (<ref>). The expression of U_t(s) is non-unique, and a simple choice would be U_t(s)=∑_i=0^d-1|ψ_i(t+s)⟩⟨ψ_i(t)|, where |ψ_i(t+s)⟩ and |ψ_i(t)⟩ are the instantaneous eigenvectors of ρ(t+s) and ρ(t), respectively. Supplemental Materials IV In the following, we will generalize the results in the main text to the case of a countably infinite dimensional Hilbert space. The key is to generalize Eq. (<ref>), i.e. to find an infinite dimensional matrix P and shift unitary operators 𝒰_i that play a role similar to that of the real Weyl operators in the finite dimensional case. It turns out that instead of a circulant matrix, the matrix P can be defined as a lower triangular Toeplitz matrix in the infinite dimensional case, such that [ p_1 p_0 p_-1 p_-2 ⋯; p_2 p_1 p_0 p_-1 ⋯; p_3 p_2 p_1 p_0 ⋯; p_4 p_3 p_2 p_1 ⋯; ⋮ ⋮ ⋮ ⋮ ⋱; ]_P[ -q_0; q_1; q_2; q_3; ⋮ ]_q⃗=[ f_1; f_2; f_3; f_4; ⋮ ]_f⃗, where p_0=p_-1=p_-2=…=0, and p_1,p_2,…, are the eigenvalues of the density matrix ρ(t). Since any infinite dimensional triangular matrix is invertible if and only if all entries on the diagonal are nonzero, and since we can always rearrange the eigenvalues of the density matrix such that p_1 is nonzero, the matrix P can always be made invertible in the infinite dimensional case. Moreover, since f_n=∑_j=1^∞ P_nj[q⃗]_j=∑_j=1^∞ p_n-j+1[q⃗]_j and 1⃗·f⃗=∑_n=1^∞ f_n, we have ∑_n=1^∞ f_n=1⃗·f⃗=1⃗· P ·q⃗=(∑_i=1^∞ p_i, ∑_i=1^∞ p_i-1, ∑_i=1^∞ p_i-2,⋯) q⃗=(1,1,1…)q⃗=∑_n=1^∞[q⃗]_n, meaning that q⃗ conserves the probability (i.e. ∑_j=1^∞ [q⃗]_j=0) if and only if f⃗ conserves the trace (i.e. ∑_n=1^∞ f_n=0). The corresponding shift operator 𝒰_i associated with q_i is defined as 𝒰_i=∑_m=0^∞|m+i⟩⟨m|, i∈ℕ. 
One can then write down the infinite dimensional master equation Eq. (<ref>).
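As a quick illustration of the infinite dimensional construction (again ours, under the stated convention q⃗=(-q_0,q_1,q_2,…)), the lower triangular Toeplitz system can be truncated to N levels and solved by forward substitution; the eigenvalues and the changes f⃗ below are arbitrary sample numbers.

import numpy as np

N = 8                                    # number of levels retained from the infinite system
rng = np.random.default_rng(1)

p = rng.random(N); p /= (p.sum() / 0.9)  # leading eigenvalues of rho(t); their tail is not needed
f = np.zeros(N); f[0], f[1], f[2] = -2e-3, 1.5e-3, 0.5e-3   # changes of the first eigenvalues

# Truncated lower triangular Toeplitz matrix: P[i, j] = p[i-j] for i >= j, else 0
P = np.array([[p[i - j] if i >= j else 0.0 for j in range(N)] for i in range(N)])

# Forward substitution, well defined as long as p[0] != 0 (the invertibility
# condition quoted above).  Because the system is lower triangular, x[n] depends
# only on f[0..n] and p[0..n], so these are the exact first N components of the
# infinite-dimensional solution x = (-q_0, q_1, q_2, ...).
x = np.zeros(N)
for n in range(N):
    x[n] = (f[n] - P[n, :n] @ x[:n]) / p[0]

print("residual:", np.abs(P @ x - f).max())
print("x[:4] =", x[:4])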
http://arxiv.org/abs/2307.04542v1
20230710131729
Customizing Synthetic Data for Data-Free Student Learning
[ "Shiya Luo", "Defang Chen", "Can Wang" ]
cs.CV
[ "cs.CV" ]
Customizing Synthetic Data for Data-Free Student Learning Shiya Luo Zhejiang University Hangzhou, China [email protected] Defang Chen Zhejiang University Hangzhou, China [email protected] Can Wang Zhejiang University Hangzhou, China [email protected] August 12, 2023 =============================================================================================================================================================================================================== Data-free knowledge distillation (DFKD) aims to obtain a lightweight student model without original training data. Existing works generally synthesize data from the pre-trained teacher model to replace the original training data for student learning. To more effectively train the student model, the synthetic data shall be customized to the current student learning ability. However, this is ignored in the existing DFKD methods and thus negatively affects the student training. To address this issue, we propose Customizing Synthetic Data for Data-Free Student Learning (CSD) in this paper, which achieves adaptive data synthesis using a self-supervised augmented auxiliary task to estimate the student learning ability. Specifically, data synthesis is dynamically adjusted to enlarge the cross entropy between the labels and the predictions from the self-supervised augmented task, thus generating hard samples for the student model. The experiments on various datasets and teacher-student models show the effectiveness of our proposed method. Code is available at: https://github.com/luoshiya/CSDhttps://github.com/luoshiya/CSD data-free knowledge distillation, self-supervision, model compression § INTRODUCTION In recent years, convolutional neural networks (CNNs) have achieved remarkable success in various applications <cit.> with over-parameterized architectures. But its expensive storage and computational costs make model deployment on mobile devices difficult. Therefore, knowledge distillation (KD) <cit.> comes into play to compress models by transferring dark knowledge from a well-trained cumbersome teacher model to a lightweight student model. The prevailing knowledge distillation methods <cit.> depend on a strong premise that the original data utilized to train the teacher model is directly accessible for student training. However, this is not always the case in some practical scenarios where the data is not publicly shared due to privacy, intellectual property concerns or excessive data size etc. Data-free knowledge distillation (DFKD) <cit.> is thus proposed to solve this problem. Existing DFKD methods generally divide each training round into two stages: data synthesis and knowledge transfer. Two different approaches are proposed in the data synthesis stage: model inversion inputs the random Gaussian noise into the fixed teacher model and iteratively updates the input via the back-propagation from the teacher model <cit.>; generative reconstruction utilizes a generator network to learn a mapping from the low-dimensional noise to the desired high-dimensional data manifold close to the original training data <cit.>. In the knowledge transfer stage, the synthetic data from the previous stage is used to train the student model with the regular knowledge distillation procedure. As training progresses, easy samples bring little new knowledge and contribute less to the student learning. 
The key to improvement of the student learning ability is to provide the student with hard samples in training such that it can continuously acquire new knowledge. Some existing adversarial DFKD methods generate hard samples on which the student disagree with the teacher by enlarging the divergence between their prediction distribution <cit.> (see Fig. <ref>). However, the teacher has not been trained on such synthetic samples, and thus soft predictions for many samples are likely to be inaccurate. The student will experience minimal improvement, or even a decline, in its learning ability when attempting to imitate the teacher on those incorrect samples (as shown in Fig. <ref>). Furthermore, it is difficult to manually evaluate whether soft predictions of the teacher is correct. In this paper, we propose Customizing Synthetic Data for Data-Free Student Learning (CSD), which directly takes the current student learning ability as a reference to adaptively synthesize hard samples and the learning ability is estimated through a self-supervised augmented auxiliary task that learns the joint distribution of the classification task and the self-supervised rotation task. In this way, the capability of capturing semantic information can serve as a good indicator of the student learning ability, and the auxiliary task can effectively verify how well the student understand semantics <cit.>. An extra auxiliary classifier appended to the student feature extractor learns the self-supervised augmented auxiliary task in knowledge transfer stage and then estimates the current student learning ability as an evaluator in data synthesis stage by calculating the divergence between labels and predictions from the auxiliary task. In this way, we accurately generate hard samples relative to current student learning ability by enlarging this divergence in an adversarial way. Different from the traditional adversarial objective <cit.>, we use the student model itself rather than the pre-trained teacher model to estimate the sample difficulty of the synthetic data (see Fig. <ref>), which is more reliable for the student training and beneficial for the student performance improvement. As shown in Fig. <ref>, the student improves its learning ability with our hard samples and are not easily disturbed by the teacher misinformation. Our contributions are summarized as follows: * We propose a novel method to dynamically generate hard samples based on the current learning ability of the student in the data-free knowledge distillation scenario. * An auxiliary classifier is used to learn a self-supervised augmented task, and also acts as an evaluator to estimate the student learning ability for hard data synthesis. * We conduct extensive experiments on various datasets and teacher-student model architectures. Experimental results confirm the effectiveness of our method. § PROPOSED METHOD The overview of our proposed CSD framework is shown in Fig. <ref>. The framework consists of a fixed pre-trained teacher, a generator, a student and an auxiliary classifier appended to the student feature extractor. The generator and the auxiliary classifier are trained in an adversarial manner. In data synthesis stage, the generator would explore hard samples based on the student learning ability with the auxiliary classifier. In knowledge transfer stage, the auxiliary classifier tries to improve its own evaluating ability. Two stages are executed alternately until convergence. 
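To summarize the alternating procedure in code, a highly simplified PyTorch-style sketch of the two stages is given below; it is our own illustration rather than the authors' implementation, the tiny linear networks, step counts and the omission of ℒ_bns and the feature loss are deliberate simplifications, and all helper names are hypothetical. The individual loss terms are detailed in the following subsections.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
num_classes, noise_dim, tau, alpha = 10, 64, 20.0, 10.0

teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, num_classes))        # stand-in teacher
student_feat = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
student_head = nn.Linear(128, num_classes)
aux_head = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 4 * num_classes))  # M=4 rotations
generator = nn.Sequential(nn.Linear(noise_dim, 3 * 32 * 32), nn.Tanh(), nn.Unflatten(1, (3, 32, 32)))

def rotations(x):
    # four rotated copies of x and their rotation indices m = 0..3
    xs = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    ms = [torch.full((x.size(0),), k, dtype=torch.long) for k in range(4)]
    return torch.cat(xs), torch.cat(ms)

def l_csd(x, y):
    # cross entropy on the self-supervised augmented task, labels k = y*M + m
    xr, m = rotations(x)
    return F.cross_entropy(aux_head(student_feat(xr)), y.repeat(4) * 4 + m)

image_bank = []
for epoch in range(2):                      # 100 rounds in the paper
    # ---- data synthesis: update z and the generator so the teacher is confident
    #      on the assigned labels while the samples stay hard for the student ----
    z = torch.randn(32, noise_dim, requires_grad=True)
    y = torch.randint(0, num_classes, (32,))
    opt_g = torch.optim.Adam(list(generator.parameters()) + [z], lr=1e-3)
    for _ in range(5):                      # n_g = 500 in the paper
        x = generator(z)
        loss = F.cross_entropy(teacher(x), y) - alpha * l_csd(x, y)   # plus the BN prior described below
        opt_g.zero_grad(); loss.backward(); opt_g.step()
    image_bank.append((generator(z).detach(), y))

    # ---- knowledge transfer: student mimics the teacher on banked data,
    #      then the auxiliary classifier is refreshed separately ----
    opt_s = torch.optim.SGD(list(student_feat.parameters()) + list(student_head.parameters()),
                            lr=0.1, momentum=0.9)
    opt_c = torch.optim.SGD(aux_head.parameters(), lr=0.1, momentum=0.9)
    for _ in range(10):                     # n_s = 2000 in the paper
        x, y = image_bank[torch.randint(len(image_bank), (1,)).item()]
        s_logits, t_logits = student_head(student_feat(x)), teacher(x).detach()
        kd = F.kl_div(F.log_softmax(s_logits / tau, 1), F.softmax(t_logits / tau, 1),
                      reduction="batchmean")
        loss_s = F.cross_entropy(s_logits, y) + kd      # + beta * feature MSE in the paper
        opt_s.zero_grad(); loss_s.backward(); opt_s.step()
        loss_c = l_csd(x, y)
        opt_c.zero_grad(); loss_c.backward(); opt_c.step()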
§.§ Data Synthesis In data synthesis stage, we follow CMI <cit.> to synthesize data x̃∈ℝ^H× W × C (H, W, C denote the height, width and channel number, respectively) from a pre-trained teacher model as the surrogate for original training data x. We jointly update random noise vector z and the parameters θ_g of the generator 𝒢 to obtain x̃=𝒢(z) for n_g steps in each training round. The generator provides stronger regularization on pixels due to the shared parameters θ_g. Although the main purpose of our work is to synthesize hard data based on the current ability of the student itself, if we synthesize data only by the student, this may make the distribution of the synthetic data far away from the original training data due to the lack of data prior constraints. The optimization objective of data synthesis consists of two components and is formulated as: min_z,θ_gℒ_narrow-αℒ_csd, where ℒ_narrow aims to narrow the gap between the synthetic data and the original training data with the help of the well-trained teacher model for alleviating outliers, and ℒ_csd estimates the learning ability of the student. We will elaborate these two terms later. Narrowing the Distribution Gap. To make synthetic data more realistic, we adopt the following optimization objective to narrow the gap between the distribution of synthetic data and original training data: ℒ_narrow = ℒ_cls + ℒ_bns, ℒ_cls represents an one-hot assumption that if the synthetic data have the same distribution as that of the original training data, the prediction of the synthetic data by the teacher model would be like a one-hot vector <cit.>. Therefore, ℒ_cls is calculated as the cross entropy between the teacher prediction 𝒯(x̃) and the pre-defined label ỹ: ℒ_cls=CrossEntropy(ỹ, 𝒯(x̃)), ℒ_bns is a constraint that effectively utilizes statistics stored in the batch normalization (BN) layers of the teacher as data prior information <cit.>. It employs running mean μ_l and running variance σ_l^2 of the l-th BN layer as feature statistics of original training data. ℒ_bns is then calculated as the l2-norm distance between features statistics of synthetic data x̃ and original training data: ℒ_bns=∑_l(‖μ̃_l(x̃)-μ_l‖_2+‖σ̃_l^2(x̃)-σ_l^2‖_2), where μ̃_l(x̃) and σ̃_l^2(x̃) are mean and variance of the feature maps at the l-th teacher layer, respectively. Customizing Synthetic Data for the Student. In each training round, it is necessary to synthesize data adaptively according to the current student learning ability, so as to prevent the student from repeatedly learning oversimple samples. To quantify learning ability, we consider that if a model can understand the semantic information of a image well, it would have a strong learning ability. Specifically, we adopt a simple self-supervised task by first rotating each image at different angles and then forcing the model to identify which angle each image comes from. As illustrated in <cit.>, the model can effectively perform the rotation recognition task unless it first learns to recognize the object categories and then recognize semantic parts in the image. But only using the rotation task to estimate learning ability is not enough. For example,“6” is rotated 180^∘ for the digit “9” and 0^∘ for the digit “6”. Inspired by <cit.>, we also combine the original classification task and the self-supervised rotation task into a unified task, named as the self-supervised augmented task, which forces the model to identify the angle as well as the category to eliminating incorrect estimation. 
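The two distribution-narrowing terms ℒ_cls and ℒ_bns defined above can be sketched as follows; this is our illustration, with a toy teacher standing in for the pre-trained model, and the batch statistics are collected with forward hooks on the BatchNorm layers as one possible realization.

import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10)).eval()

bn_stats = []          # (running_mean, running_var, batch_mean, batch_var) per BN layer

def bn_hook(module, inputs, output):
    x = inputs[0]
    mean = x.mean(dim=(0, 2, 3))
    var = x.var(dim=(0, 2, 3), unbiased=False)
    bn_stats.append((module.running_mean, module.running_var, mean, var))

for m in teacher.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.register_forward_hook(bn_hook)

def synthesis_prior_losses(x_tilde, y_tilde):
    # L_cls: the teacher should classify synthetic images as their assigned labels.
    # L_bns: batch statistics of synthetic images should match the BN running statistics.
    bn_stats.clear()
    logits = teacher(x_tilde)
    l_cls = F.cross_entropy(logits, y_tilde)
    l_bns = sum(torch.norm(bm - rm, 2) + torch.norm(bv - rv, 2)
                for rm, rv, bm, bv in bn_stats)
    return l_cls, l_bns

x = torch.randn(16, 3, 32, 32, requires_grad=True)     # stands in for G(z)
y = torch.randint(0, 10, (16,))
l_cls, l_bns = synthesis_prior_losses(x, y)
(l_cls + l_bns).backward()                              # gradients flow back to the synthetic images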
We consider a N-way classification task and a M-way self-supervised rotation task. The CNN student model consists of two components: the feature extractor Φ:x̃→ℝ^d and the classifier h:ℝ^d→ℝ^N, i.e., 𝒮(x̃)=h(Φ(x̃)). Here d denotes the feature dimension. we attach an auxiliary classifier c:ℝ^d→ℝ^K with parameters θ_c behind the feature extractor, where K=N*M represents the number of categories for the self-supervised augmented task. ℒ_csd is calculated as follows: ℒ_csd = CrossEntropy(k, c(Φ(trans(x̃)))), where trans(·) is the operation of rotation and k is the label of the rotated version of synthetic data x̃ in the self-supervised augmented task. For example, if the category of x̃ in the original classification task is n and the category of its rotated version in the self-supervised rotation task is m, then the category in the self-supervised augmented task is n*M+m. By enlarging ℒ_csd, we generate hard samples on which the student has difficulty understanding semantics. §.§ Knowledge Transfer In knowledge transfer stage, the main purpose is to encourage the student model to mimic behaviors of the teacher model. The vanilla KD <cit.> matches final prediction distribution of the teacher and student model by calculating the Kullback-Leibler (KL) divergence between outputs of the teacher and the student: ℒ_kd = KL(σ(𝒯(x̃)/τ), σ(𝒮(x̃)/τ)), where σ(·) is the softmax function and τ is a hyper-parameter to soften the distribution. We set τ to 20 throughout all experiments for fair comparison as CMI <cit.>. Besides prediction distribution, feature maps can also be used as valuable knowledge to effectively guide the student <cit.>. We define the Mean-Square-error (MSE) loss between teacher feature maps F_t∈ℝ^H_t*W_t*C_t and student feature maps F_s∈ℝ^H_s*W_s*C_s from the last layer as: ℒ_fea = MSE(F_t, r(F_s)), where r(·) is a projection to align the dimension of feature maps. The student is trained for n_s steps in each training round and optimized by: min_θ_sℒ_ce+ℒ_kd+β*ℒ_fea, where β is a hyper parameter to balance the three loss items, and ℒ_ce=CrossEntropy(ỹ,𝒮(x̃)) is a regular loss in the original classification task to calculate cross entropy between student outputs and pre-defined labels. Besides the student training, the auxiliary classifier is also separately trained with the following loss to improve its own evaluation capability to better help the data synthesis stage: min_θ_cℒ_csd. §.§ Training Procedure The two-stage training procedure is summarized in Algorithm <ref>. In the data synthesis stage, the random noise z and generator 𝒢 are first trained for n_g times. Then we append the new synthetic data into an image bank for preventing catastrophic forgetting <cit.>. In knowledge transfer stage, we sample data from the image bank and separately train the student 𝒮 and the auxiliary classifier c for n_s times. § EXPERIMENTS Datasets and models. We conduct experiments on SVHN <cit.>, CIFAR-10 and CIFAR-100 <cit.> datasets, following a similar training setting as <cit.>. For all datasets, various models are used, including ResNet <cit.>, WRN <cit.>, VGG <cit.> and MobileNet <cit.>. The generator architecture is the same as <cit.>. Training details. For all datasets, to prevent the student from overfitting to data generated by early training rounds <cit.>, we first synthesize some data to initialize the image bank by removing ℒ_csd and running 400 synthesis batches with each one containing 200 samples. We totally train 100 rounds (epochs). 
In the data synthesis stage, the random noise vector and generator are updated using the Adam optimizer with a 1e-3 learning rate. We synthesize 200 images in each step and repeat for n_g=500 steps. The hyper-parameter α is set to 10. In the knowledge transfer stage, the student and the auxiliary classifier are updated using the SGD optimizer with a 0.1 learning rate, 0.9 momentum and 1e-4 weight decay, and we adopt cosine annealing for the learning rate decay. We sample 128 images from the image bank in each step and repeat for n_s=2000 steps. The hyper-parameter β is set to 30. We set the temperature τ to 20. Test accuracy is used to evaluate the proposed method. We run all experiments three times and report the means. More implementation details and results can be found in the appendix. §.§ Comparison with DFKD methods We compare with four representative DFKD methods on five groups of teacher-student models, including three homogeneous and two heterogeneous architecture combinations. DAFL <cit.> and ZSKT <cit.> are generator-based methods. ADI <cit.> and CMI <cit.> are inversion-based methods. Table <ref> shows that our proposed CSD outperforms all other methods. We also observe that, except for CMI, the other comparison methods perform poorly on heterogeneous combinations and more complex datasets. For example, in the case of “WRN-40-2 & VGG8" on CIFAR-100, the test accuracy of DAFL is only 25.24%, which does not even reach half the accuracy of the student trained on the original data (68.76%). In contrast, our proposed CSD is robust on different datasets and teacher-student combinations. §.§ Effect of Our Proposed Adversarial Loss We conduct an ablation study on CIFAR-10 and CIFAR-100 to explore whether our proposed adversarial loss L_csd can help improve the student performance. As shown in Table <ref>, in the case of Baseline, i.e., removing the adversarial loss (Equation <ref>), the accuracy drops by 3.62% on CIFAR-10 (from 90.50% to 86.88%) and 3.29% on CIFAR-100 (from 60.88% to 57.59%), which demonstrates the effectiveness of our proposed ℒ_csd. To further demonstrate the superiority of our method, we compare with two alternative adversarial strategies. The first one is the traditional adversarial manner of previous work <cit.>, whose adversarial loss calculates the divergence between the predictions of the teacher and student. We replace ℒ_csd with the traditional adversarial loss L_adv = KL(σ(𝒯(x̃)/τ), σ(𝒮(x̃)/τ)) and find that it has a slight improvement of 0.65% (from 86.88% to 87.57%) compared to Baseline on CIFAR-10. Surprisingly, we observe that it even results in a large drop of 4.09% (from 57.59% to 53.5%) on the more complex CIFAR-100 dataset. This indicates that estimating the sample difficulty with teacher predictions is likely to be unreliable, which would enlarge the negative effect in the case of teacher misdirection and thus weaken the student performance. Additionally, we plot the learning curves of the student trained by the different strategies. In Fig. <ref>, it is clear that ℒ_adv causes very large accuracy fluctuations across training rounds (epochs), while our CSD makes the model converge faster and more stably. The second alternative strategy is to use only the rotation task as the final task to quantify the student learning ability, without the original classification task. So we replace ℒ_csd with the self-supervised rotation loss ℒ_rotation = CrossEntropy(m,c(Φ(trans(x̃)))), where m is the label of synthetic data in the rotation task. 
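For reference, the three objectives contrasted in this ablation can be written side by side as in the sketch below (our paraphrase; teacher, student, feat, rot_head and aux_head are placeholder callables, and l_csd simply repeats the earlier definition so that the three can be compared directly).

import torch
import torch.nn.functional as F

def rotated_batch(x):
    # four rotated copies of x (0/90/180/270 degrees) and their rotation labels m
    xs = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    m = torch.arange(4).repeat_interleave(x.size(0))
    return torch.cat(xs), m

def l_adv(x, teacher, student, tau=20.0):
    # traditional adversarial objective: teacher-student disagreement, enlarged during synthesis
    return F.kl_div(F.log_softmax(student(x) / tau, dim=1),
                    F.softmax(teacher(x) / tau, dim=1), reduction="batchmean")

def l_rotation(x, feat, rot_head):
    # rotation-only variant: a 4-way head predicts just the rotation index m
    xr, m = rotated_batch(x)
    return F.cross_entropy(rot_head(feat(xr)), m)

def l_csd(x, y, feat, aux_head, M=4):
    # CSD objective: the auxiliary classifier predicts the joint label k = y*M + m
    xr, m = rotated_batch(x)
    return F.cross_entropy(aux_head(feat(xr)), y.repeat(M) * M + m)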
From Table <ref>, the rotation-only strategy brings significant performance improvements on both CIFAR-10 and CIFAR-100 compared to the traditional adversarial manner, which shows the superiority of synthesizing hard samples according to the current student learning ability. However, the rotation task alone may destroy the original visual semantic information of some samples (such as “6” vs “9”) and result in inaccurate ability estimation. By combining the original classification task and the self-supervised rotation task, our CSD further improves the model performance. §.§ Auxiliary Classifier Analysis Next, we explore how the structure and training strategy of the auxiliary classifier affect the final student performance. To study the effect of the auxiliary classifier structure, we attach different numbers of fully-connected layers (from 1 to 3) behind the feature extractor. In Fig. <ref>, a single fully-connected layer even has a negative impact, which reduces the student performance on CIFAR-10 and CIFAR-100 by about 3% and 5% compared to the Baseline (without ℒ_csd), while two or three fully-connected layers achieve similarly superior performance. We conjecture that multiple layers can effectively filter out noise in the feature representations to accurately estimate the student ability. Therefore, we adopt two fully-connected layers as the auxiliary classifier for all experiments to trade off effectiveness and complexity. To study the effect of the training strategy during the knowledge transfer stage, we conduct experiments with two different training strategies: joint training and separate training. (1) Joint training updates the parameters of the student and the auxiliary classifier simultaneously at each step, that is, changing lines 17 and 18 of Algorithm <ref> to θ_s←θ_s-ξ∇_s(ℒ_KT+ℒ_csd) and θ_c←θ_c-ξ∇_c(ℒ_KT+ℒ_csd). This strategy requires the student to learn the self-supervised augmented task together with the original classification task. (2) Separate training is exactly our adopted strategy for CSD. At each step, we update the student parameters first, then fix them and turn to training the auxiliary classifier. Table <ref> demonstrates that separate training performs better. We conjecture that the additional self-supervised auxiliary task might distract the student from the main classification task. § CONCLUSION In data-free knowledge distillation, the student model itself can act as a key contributor to synthesizing more valuable data, while this point was largely overlooked previously. In this paper, we utilize a self-supervised augmented task to accurately estimate the current student learning ability in each training round so as to synthesize more valuable data rather than oversimple synthetic data. Extensive experiments are conducted on three popular datasets and various groups of teacher-student models to evaluate the performance of our proposed method, and the results demonstrate the effectiveness of our proposed CSD. A potential future work is to explore how to apply popular diffusion models to synthesizing samples for data-free knowledge distillation <cit.>. § APPENDIX §.§ Experimental Details §.§.§ Datasets We evaluate our proposed CSD on three public datasets for the classification task: SVHN, CIFAR-10 and CIFAR-100. The details of these datasets are listed as follows: * SVHN <cit.>. SVHN is a dataset of street view house numbers collected by Google, and the size of each image is 32×32. 
It consists of over 600,000 labeled images, including 73257 training images, 26,032 testing images and 531,131 additional training images. * CIFAR-10 <cit.>. CIFAR-10 is a dataset of 32×32 colored images. It consists of 60,000 labeled images from 10 categories. Each category contains 6,000 images, which are divided into 5,000 and 1,000 for training and testing, respectively. * CIFAR-100 <cit.>. CIFAR-100 is similar but more challenging to CIFAR-10, which consists of 100 categories. Each categories contains 500 training images and 100 testing images. Note that the training set is only utilized for teacher training and is unseen for data-free knowledge distillation. However, the testing set is still used for assessment. §.§.§ Model Architectures For all datasets, three network types are used in teacher-student models: ResNet <cit.> ,WRN <cit.>, VGG <cit.> and MobileNet-V2 <cit.>. The number behind “VGG" and “ResNet" denotes the depth of the network. “WRN-n-k" denotes a residual network with n depths and widening factor k. We use the same generator architecture as the previous work <cit.>, which is detailed in Table <ref>. We set the dimension of random noise vector to 256. §.§.§ Baseline We compare with four representative data-free knowledge distillation methods: two generator-based methods (DSFL and ZSKT) and two inversion-based methods (ADI and CMI). The details of these compared methods are listed as follows: * DAFL <cit.>. DFAL is a generator-based DFKD method that introduces one-hot loss, activation loss and information entropy loss from the teacher feedback as constraints to generate data close to the original training data. * ZSKT <cit.>. ZSKT is another generator-based DFKD method that first introduces adversarial distillation. It generate hard samples on which the student poorly matches the teacher, i.e., maximizing the KL divergence between their predictions, and then use these hard samples to minimize the KL divergence in order to train the student. * ADI <cit.>. ADI is an inversion-based DFKD method that first proposes to utilize statistics stored in batch normalization layers of the teacher as image prior information. * CMI <cit.>. CMI is another inversion-based DFKD method that mainly addresses model collapse issue. It introduces a contrastive learning objective to encourage each sample to distinguish itself from others for sample diversity. §.§ Visualization We visualize synthetic images of our CSD from different training epochs in Figure <ref>. We observe that images from early training epoch are more visually discernible than images from later training epoch, which indicates that as the number of training epochs increases, the student learning ability gradually becomes stronger, leading to more difficult synthetic images. Additionally, we plot the learning curves of the auxiliary classifier during knowledge transfer in Fig. <ref>. §.§ Sensitivity Analysis To study how the hyper-parameter α affect the student final performance, we plot student accuracy curves on CIFAR-100 for WRN-40-2 & WRN-16-1 with α ranging from 2 to 20 at equal interval of 2. From Fig. <ref>, we find that our CSD outperforms the best competitor (CMI) on all values of α. §.§ RELATED WORK §.§.§ Data-Driven Knowledge Distillation Knowledge distillation (KD) is proposed to solve model compression problem by distilling knowledge from a cumbersome model (teacher) into a less-parameterized model (student). 
The vanilla KD <cit.> takes predictions from the last layer as the teacher knowledge to guide the student training. Besides predictions, many subsequent works excavate the knowledge in the output of intermediate layers to supervise the training of the student. The intermediate supervision can be formed by feature maps <cit.>, attention maps <cit.> or feature representation <cit.>. There are also some works for transferring knowledge in relationships between different samples or layers <cit.>. All the above mentioned methods are based on the premise that the original training data is available, while our proposed method is discussed in a more challenging scenario of no original data. §.§ Data-Free Knowledge Distillation Data-free knowledge distillation (DFKD) deals with transferring knowledge without the access to the original training data. A straightforward idea is to synthesize the original data for knowledge transfer. The approaches of data synthesis can be roughly categorized into two classes: inversion-based and generator-based approaches. Inversion-based approaches input the random Gaussian noise into the fixed teacher and update the input iteratively via the back-propogation until meeting certain constraints <cit.>. ADI <cit.> proposes to leverage information stored in the batch normalization layers of the teacher to narrow gap between synthetic data and original data. CMI <cit.> introduces contrastive learning objective to address the mode collapse issue and thus ensure sample diversity. FastDFKD <cit.> introduces a meta-synthesizer to accelerate data synthesis process and achieves 100× faster speed. Generator-based approaches adopt a learnable generator to synthesize data <cit.>. DAFL <cit.> introduce one-hot loss, activation loss and information entropy loss as the objective of synthesizing data, which are calculated according to the teacher output. PRE-DFKD <cit.> designs a Variational Autoencoder (VAE) to replay synthetic samples for preventing catastrophic forgetting without storing any data. Adversarial Distillation <cit.> focus on synthesizing hard data by enlarging the divergence between predictions of the teacher and the student, so as to narrow the information gap between the teacher and the student. However, all above methods do not properly take into account the student's current ability during data synthesis, which may lead to oversimple samples and thus limit the final student performance. IEEEbib
http://arxiv.org/abs/2307.04237v2
20230709175337
Study of exponential wormhole metric in $f(R)$ gravity
[ "Partha Pratim Nath", "Debojit Sarma" ]
gr-qc
[ "gr-qc" ]
* =18 pt 1,a]Partha Pratim Nath 2,a]Debojit Sarma [a]Department of physics, Cotton University [1][email protected] [2][email protected] Study of exponential wormhole metric in f(R) gravity [ August 12, 2023 ==================================================== In this work, we have studied an "exponential form" of spacetime metric: ds^2 = -e^-2m/rdt^2 +e^2m/rdr^2 + e^2m/r[r^2 dθ^2 + r^2 sin^2θ dϕ^2] in some of the viable f(R) gravity models, viz. exponential gravity model, Starobinsky gravity model, Tsujikawa model and Gogoi-Goswami f(R) gravity model. Here we have calculated the parameters including energy density, tangential and radial pressure for these corresponding models of f(R) gravity. Subsequently we have investigated the energy conditions viz. null energy condition(NEC), weak energy condition(WEC) and strong energy condition(SEC) for the considered models. We have also explained the suitable conditions of energy for these models by related plots. Keywords. Wormhole Geometry, Modified gravity, Energy Conditions. § INTRODUCTION Wormholes in General Relativity belong to a special class of solutions to the Einstein's Field Equations which act as tube-like or bridge-like that connects two distinct points of the same space-time of two different universes. The tubular structure is considered to be asymptotically flat on both sides. One of the main features of wormhole is the wormhole throat, which can be defined as a two dimensional hypersurface of minimal area or the point where the radius is minimum <cit.>. The concept of this bridge-like structure was first constructed by Einstein and Rosen which is known as Einstein-Rosen bridge <cit.>. They inspected the exact solution that describe the geometry of the bridge. Their solution is linked with the work of Ludwig Flamm <cit.>, who for the first time constructed the isometric embedding of Schwarzschild solution but his solutions sustained some stability problems. Hermann Weyl in 1928 <cit.> proposed a wormhole hypothesis of matter in connection with mass analysis of electromagnetic field theory. However he used the term "one-dimensional tube" instead of the term "wormhole". Later Ellis <cit.> gave another term for wormhole known as "Drainhole". Then Wheeler <cit.> named them as "Geons" and predicted the shape of the wormhole which offers a twofold space. Wheeler and Misner <cit.> coined the term "wormhole" and later his solutions were transformed into Euclidean wormholes by Hawking <cit.> and others. These theoretical objects lead to various static and non-static in proportion to the fixed or variable radius of the wormhole throat. Shortly afterward, Kar <cit.> discussed the static wormhole and inquired into their properties with examples. Kar and Sahdev <cit.> explored evolving Lorentzian wormholes. Kar wormholes have a quantum structure and connect different points of the space on the Planck Scale. All these wormholes were not stable and traversable. Traversability is also an important feature of wormhole. If anything that enters through one side of the wormhole can exit through the other, the wormhole is traversable. In order to become traversable, the wormhole should not contain a horizon, because the presence of the horizon would prevent the two-way travel through the wormhole. Morris-Thorne <cit.> gave the idea of traversable wormhole with some new concepts such as throat. 
They examined static spherically symmetric wormholes using the principles of General Relativity and introduced the fundamental theory of traversable wormholes. The energy-momentum tensor of the matter supporting such geometries at the wormhole throat necessitates the introduction of exotic matter <cit.>, which leads to the violation of the null energy condition (NEC) and the averaged null energy condition (ANEC) <cit.> near the throat region. Exotic matter is a form of dark energy (having an EoS with ω<-1/3), which produces a repulsion. Recent observations have shown that dark energy is solely responsible for the accelerated expansion of the universe. Since then, wormholes have been studied from various aspects and under various conditions <cit.>. Since exotic matter is a troublesome issue, many justifications have been presented in favor of the violation of the energy conditions, such as invoking quantum fields in curved spacetime, scalar-tensor theories <cit.> and so on. Many efforts have been made to reduce the use of exotic matter. The "volume integral quantifier" is one of the most famous propositions, which quantifies the total amount of energy-condition-violating matter <cit.>. Further, Nandi and others <cit.> improved this formulation to determine the exact quantity of the exotic matter present in a given spacetime. Additionally, there have been proposals regarding the confinement of exotic matter at the throat of the wormhole, viz. the cut-and-paste method <cit.>. To avoid the energy violations, thin-shell wormholes <cit.> were studied, where ordinary matter is concentrated on the throat of the wormhole. In recent years, wormhole solutions have been developed in the background of modified gravity theories such as Kaluza-Klein gravity <cit.>, Born-Infeld theory <cit.>, Brans-Dicke theory <cit.>, mimetic theories <cit.>, f(R) gravity <cit.>, Einstein-Gauss-Bonnet theory <cit.> and Einstein-Cartan theory <cit.>. It has been shown in modified theories of gravity that the matter inside the wormhole may satisfy the necessary energy conditions, but the effective stress-energy tensor <cit.> containing higher order derivatives is responsible for the violation of the Null Energy Condition (NEC). The modified or extended theories of gravity have proposed logical explanations for some observational phenomena that can hardly be explained through the General Theory of Relativity. For example, dark energy <cit.>, dark matter <cit.>, massive pulsars <cit.>, super-Chandrasekhar white dwarfs <cit.>, etc. can be explained with the help of gravity theories such as f(R) <cit.> and f(τ) <cit.>, where R and τ are, respectively, the Ricci and torsion scalars. One of the easiest modifications of the Einstein-Hilbert action is the f(R) theory of gravity, in which the curvature scalar or Ricci scalar R in the gravitational action is replaced by f(R), an arbitrary function of the Ricci scalar R <cit.>. Buchdahl <cit.> first proposed the f(R) gravity model in 1970. The field equations obtained in f(R) theory are very complicated and have a larger set of solutions than those of the General Theory of Relativity. Bertolami <cit.> and others extended this theory by providing a coupling between matter and the function f(R), which leads to an extra force that may explain the acceleration of the universe <cit.>. By using f(R)=R+α R^2, Starobinsky <cit.> first derived an early inflationary universe solution long before the effectiveness of the inflaton was known. The late-time cosmic acceleration has been explained by Carroll et al. <cit.> 
in the context of f(R) gravity. Many researchers have studied numerous viable cosmological models in f(R) gravity <cit.>. Bertolami, Sotiriou, Harko and others explored the coupling of an arbitrary function of R with the matter Lagrangian density. Using constraints from strong lensing, f(R) gravity was studied in the Palatini formalism by Yang and Chen <cit.>. Capozziello and Laurentis <cit.> gave a different approach to the dark matter problem in the context of f(R) gravity. Bronnikov and Starobinsky <cit.> demonstrated that wormholes cannot be formed in dark matter models governed by scalar-tensor theory, even in the presence of electric and magnetic fields. Bronnikov et al. <cit.> have shown that for df/dR=F(R)<0, the non-existence theorem for wormholes could be violated in f(R) gravity. Both Brans-Dicke theory and f(R) gravity were considered to obtain wormholes. It is shown <cit.> that no vacuum wormhole exists in Brans-Dicke theory, but one exists in f(R) gravity if it satisfies an extremum where the effective gravitational constant changes its sign. Bronnikov et al. <cit.> have proven a no-go theorem in General Relativity for obtaining wormhole solutions, according to which the existence of wormholes with flat or AdS asymptotic regions on both sides of the throat is ruled out when the source matter is isotropic. In f(R) gravity, Lobo and Oliveira <cit.> obtained wormhole solutions where the matter threading the wormhole is a fluid which satisfies the energy conditions, while the higher order terms of the f(R) theory give rise to the energy violations. Beato et al. <cit.> have shown that exotic matter is not mandatory for constructing traversable wormholes. Similarly, Harko et al. <cit.> and Pavlovic and Sossich <cit.> also showed that the f(R) theory can describe the wormhole geometry without any kind of exotic matter. Mazharimousavi and Halilsoy <cit.> also constructed a traversable wormhole model that satisfies the energy conditions. Exact solutions of traversable wormholes with non-constant Ricci scalar have been obtained by Golchin and Mehdizadeh <cit.>. On the other hand, Restuccia and Tello-Ortiz <cit.> have given a new class of f(R) gravity models and studied cosmological parameters. Spherically symmetric Lorentzian wormholes <cit.> have also been investigated with a constant scalar curvature in quadratic f(R) gravity. De Benedictis and Horvat <cit.> showed the existence of wormhole throats in f(R) gravity and also studied their properties. On the other hand, Sharif and Zahra <cit.> investigated wormhole solutions for isotropic and anisotropic fluids and a barotropic equation of state for the radial pressure. In numerous f(R) models, wormholes have been studied using the Karmarkar condition <cit.>. Many physicists have studied wormholes in various f(R) gravity models using different redshift and shape functions <cit.>. Vittorio et al. developed astrophysical techniques to detect wormholes and, at the same time, to reconstruct the solution once they have been observed <cit.>. Here, in this work, we investigate the so-called exponential wormhole metric in f(R) gravity. For more than 60 years, this metric has been investigated by many researchers. It has the charming property that it passes almost all of the standard lowest order weak field tests of General Relativity, while its medium and strong field behaviour is very different. The paper is organized as follows. In Sec.II, we have studied the exponential wormhole metric in General Relativity. 
Where we have studied various properties such as throat radius, Karmarkar condition, field equations, flare-out condition, Ricci convergence condition etc. In Sec.III, we have studied the exponential wormhole metric in modified f(R) gravity model. First we have constructed the field equations and with the help of these we studied the exponential wormhole metric in four viable f(R) gravity model. Finally we present results and discussion in the Sec.IV. § EXPONENTIAL WORMHOLE METRIC IN GENERAL RELATIVITY The exponential wormhole metric <cit.>, ds^2 = -e^-2m/rdt^2 +e^2m/rdr^2 + e^2m/r[r^2 dθ^2 + r^2 sin^2θ dϕ^2], has an attractive feature in weak fields <cit.>, that is when 2m/r<<1, we have ds^2=[-dt^2+dr^2+r^2(dθ^2 + sin^2θ dϕ^2)]+2m/r[dt^2+dr^2+r^2(dθ^2 + sin^2θ dϕ^2)]. i.e., g_ab=η_ab +2m/rδ_ab. As this matches with the lowest order field expansion, so the exponential metric will pass all the lowest order weak field test of General Relativity. But strong field and medium field behaviour are very different <cit.>. §.§ Wormhole throat In order to find the radius of the throat of the wormhole, let us consider the area of the spherical surface, S(r)= 4π r^2 e^2m/r. For the extremum value of S(r), d S(r)/dr=8π (r-m)e^2m/r=0, which gives r=m, which is the radius of the throat. Because at r=m, we find that d^2 S(r)/dr^2= 8π e^2>0, i.e., the area has a minimum at r=m. Again, all the metric components are finite and the diagonal components are non-zero at the throat (i.e. at r=m). §.§ Karmarkar Condition For a static and spherically symmetric line element to be class one, Karmarkar <cit.> developed a mandatory condition. For the exponential metric, some of Riemann curvature tensors are, R_1414= 2m(m-r)e^-2m/r/r^4 ; R_1212=-me^2m/r/r; R_1224=R_1334=0; R_3434=-m(m-r)sin^2 θ e^-2m/r/r^2; R_2323=-m(m-2r)sin^2 θ e^-2m/r, These Riemann components fulfilling the well known Karmarkar relation, R_1414=R_1212R_3434+R_1224R_1334/R_2323 with R_2323≠ 0. The spacetime, that satisfies the Karmarkar condition is known as embedding class one. Now, by substituting the non-zero Riemann components in the above relation, we get m^2(2m-3r)(m-r)sin^2θ/r^4=0. Solving the above relation, we will get m=0 or m=r or m=3/2r. We have found that the throat radius is r=m. That means the exponential wormhole metric fulfills the requirement of class one at or near the throat. But m=0 is forbidden due to the flare out condition. §.§ Field equations For the exponential metric, the explicit forms of the non-zero Einstein tensor components are, G^r_r=-G^t_t=-G^θ_θ=-G^ϕ_ϕ=-m^2 e^-2m/r/r^4. In the units of c=1, G=1, it leads to, ρ(r)=p_r(r)=-p_t(r)=-m^2 e^-2m/r/r^4. Where ρ, p_r and p_t stand for energy density, radial and tangential pressure respectively. From the above relation we can see that, ρ+p_r=-2 m^2 e^-2m/r/r^4<0, ρ + p_t=0 and ρ + p_r+ 2 p_t=0 At the throat, the above takes the value, ρ+p_r= -2/(me)^2<0;ρ + p_t=0;ρ + p_r+ 2 p_t=0 and ρ =-1/(me)^2<0. In terms of the principal pessures, the energy conditions are given as, Null Energy Condition(NEC): ρ+p_r≥0, ρ+p_t≥ 0 Weak Energy Condition(WEC): ρ≥0, ρ+p_r≥0, ρ+p_t≥ 0 Strong Energy Condition(SEC): ρ≥0, ρ+p_t≥0, ρ+p_r+2 p_t≥0 It is seen that the exponential wormhole metric partially violates all the energy conditions, specially at or near the throat. Which can be more clearly visible from the FIG(<ref>). §.§ Flare-out condition The flare-out condition is more understandable through the embedding geometry. 
The embedded spacetime at t=constant and θ=π/2 for the exponential wormhole metric is given by, ds^2_e= e^2m/r[dr^2+ r^2 dϕ^2] In three dimensional Euclidean space the embedded surface has equation z=z(r), so that the metric of the surface can be written as, ds^2_e=[1+(dz/dr)^2]dr^2+e^2m/rr^2 dϕ^2 Comparing the relations Eq.(<ref>) and Eq.(<ref>), we get dz/dr=±(e^2m/r-1)^1/2. Here, we observe that dz/dr→ 0 as r→∞. Which implies that the space is asymptotically flat <cit.>. Now, the flare-out condition is given by the minimality of the wormhole throat as, d/dz(dr/dz)=m e^2m/r/r^2(e^2m/r-1)^2>0 i.e., m>0. Again, for the exponential wormhole metric, surface tension τ is given as <cit.>, τ=m^2 e^-2m/r/r^4 Usually, the exoticity function ζ is used for the flare-out condition, which is given as, ζ=τ-ρ/|ρ|>0. For the exponential wormhole metric, the value of ζ comes out as, ζ=τ-ρ/|ρ|=2>0 So, for the exponential wormhole metric (τ-ρ)>0 everywhere, i.e. the metric obeys flare-out conditions everywhere. It was assumed that the wormhole should have a large surface tension compared to the energy density to continue the geometry. This condition seems to be physically reasonable. This violates the Weak Energy Condition(WEC)or averaged Weak Energy Condition to minimize the use of exotic matter. §.§ Curvature tensor The non-zero components of Riemann tensor, Rici tensor, Ricci curvature scalar, Kretschmann scalar and other related scalars for the exponential metric are, R^tr_tr=-2R^tθ_tθ=-2R^tϕ_tϕ=2me^-2m/r(r-m)/r^4, R^rθ_rθ=R^rϕ_rϕ=-m/r^3e^-2m/r, R^θϕ_θϕ=m(2r-m)/r^4e^-2m/r, R^a_b=-2m^2/r^4e^-2m/r diag(0,1,0,0)^a_b, R=-2m^2/r^4e^-2m/r, R_abcdR^abcd=4m^2(12r^2-16mr+7m^2)/r^8e^-2m/r, C_abcdC^abcd=16m^2(3r-2m)/3r^8e^-2m/r, R_abR^ab=4m^4/r^8e^-2m/r The non zero Electric parts of the Weyl tensors are, E_θθ=E_ϕϕ=-2E_rr=2m(r-m)/3r^4e^-2m/r, E_tt=-m(m+2r)/3r^4e^-6m/r, E_abE^ab=m^2(-4mr(2+7r^4)+m^2(4+17r^4)+4(r^2+5r^6)+4(m-r)^2 ^4θ/9r^12e^-8m/r All these components are finite and they donot diverge at r=0 and r=m, they decreases to zero both as r→∞ and as r→ 0. So we can say that the exponential wormhole metric doesnot contain any kind of Weyl and Oscillating Ricci singularity[ref]. The four curvature invariants viz. Ricci scalar, the first two Ricci invariants and the real component of the Weyl invariant are <cit.>, R= -2m^2/r^4e^-2m/r, r_1=1/4S^b_a S^a_b =3m^4/4r^8e^-4m/r, r_2=-1/8S^b_a S^a_c S^c_b=3m^6/8r^12e^-6m/r and ω_2=-1/8C̅_abcdC̅^abefC̅^cd_ef=-32m^3(2m-3r)^3/9r^12e^-2m/r. Where S_ab=R_ab-1/4g_abR. When the curvature invariants are plotted FIG(<ref>), it is seen that they are all nonzero and depend only on the radial coordinate r which indicates spherical symmetry. Again they are finite at r=m(at the throat) and decay to zero as r→∞. R and ω_2 have a minima near the throat whereas r_1 and r_2 have a maxima. These plots are finite everywhere indicating the absence of horizon. So we can conclude that the exponential wormhole metric represents a traversable wormhole. §.§ Ricci convergence Any Lorentzian spacetime is said to fulfil the timelike, null and spacelike Ricci convergence condition if for all timelike, null or spacelike vectors t^a one has <cit.>, R_abt^at^b ≥ 0 Now, for the exponential wormhole metric, we can see that, R_ab=-2m^2/r^4diag(0,1,0,0)_ab. So the Ricci convergence condition leads to, R_abt^at^b=-2m^2/r^4(t^r)^2 ≤ 0. So, the exponential wormhole metric violates the null Ricci convergence condition for all timelike, null and spacelike vectors. 
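Several of the closed-form statements above are easy to cross-check symbolically; the short sketch below (ours) verifies the throat location, the asymptotic flatness of the embedding and the exoticity value ζ=2, taking the quoted expressions for ρ and τ as given.

import sympy as sp

r, m = sp.symbols('r m', positive=True)

# Throat: the spherical area S(r) = 4*pi*r^2*exp(2m/r) is extremal only at r = m,
# and the second derivative there is positive, so the throat sits at r = m.
S = 4 * sp.pi * r**2 * sp.exp(2 * m / r)
print(sp.solve(sp.diff(S, r), r))                        # [m]
print(sp.simplify(sp.diff(S, r, 2).subs(r, m)))          # 8*pi*exp(2) > 0

# Embedding: dz/dr = sqrt(exp(2m/r) - 1) tends to 0 as r -> oo (asymptotic flatness).
print(sp.limit(sp.sqrt(sp.exp(2 * m / r) - 1), r, sp.oo))   # 0

# Exoticity: with rho = -m^2 e^{-2m/r}/r^4 and tau = +m^2 e^{-2m/r}/r^4,
# zeta = (tau - rho)/|rho| = 2 everywhere (|rho| = -rho since rho < 0).
rho = -m**2 * sp.exp(-2 * m / r) / r**4
tau = m**2 * sp.exp(-2 * m / r) / r**4
print(sp.simplify((tau - rho) / (-rho)))                 # 2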
Again we can show that, R_ab=-2m^2/r^4diag(0,1,0,0)_ab=-1/2∇_a(2m/r)∇_b(2m/r)=-1/2∇_aΦ∇_bΦ. and G_ab=-1/2[∇_aΦ∇_bΦ-1/2g_ab(g^cd∇_c Φ∇_d Φ)] i.e. Einstein equation for a negative kinetic energy massless scalar field, a ghost or phantom field. The contracted Bianchi identity G^ab_;b gives the scalar field equation of motion (g^ab∇_a ∇_b)Φ=0, which indicates that the exponential wormhole metric represents a traversable wormhole metric <cit.>. § EXPONENTIAL WORMHOLE METRIC IN F(R) GRAVITY The gravitational action for f(R) gravity can be defined as, S=1/2k∫ [f(R)+L_m]√(-g)d^4x, where k=8π G, L_m and g stand for the matter Lagrangian density and the determinant of the metric g_μν respectively. Here, for simplicity, we will consider k as unity. Now varying the Eq.(<ref>) with respect to the metric g_μν gives the field equations as, FR_μν-1/2fg_μν-∇_μ∇_νF+ F g_μν=T^m_μν, where R_μν represents Ricci tensor and F=df/dR. Now we can consider the contraction of Eq.(<ref>) to obtain the relation, FR-2f+3 F=T. Where R=g^μνR_μν and T=g^μνT_μν represent Ricci scalar and trace of stress energy tensor respectively. Combining the Eqs.(<ref>) and Eq.(<ref>), the effective field equations are calculated as, G_μν≡ R_μν-1/2Rg_μν=T^eff_μν, with T^eff_μν=T^c_μν+T^m_μν/F, where T^c_μν=1/F[∇_μ∇_ν F-1/4(FR+ F+T)]. The energy momentum tensor for the matter source of the wormholes is T_μν=∂ L_m/∂ g^μν, which is defined as, T_μν=(ρ+p_t)u_μ u_ν-p_t g_μν+(p_r-p_t)X_μ X_ν, such that u^μu_ν=-1 and X^μX_ν=1, where u_μ is the four velocity and X_μ is the unit space-like vector. Again ρ, p_r and p_t are energy density, radial pressure and tangential pressure respectively. Now Einstein's field equation for the metric Eq.(<ref>) in f(R) gravity can be solved as, ρ=-e^-2m/r(e^2m/rHr^4+m^2 F(r)+mr^2F^'(r))/r^4, p_r=e^-2m/r(e^2m/rHr^4-m^2F(r)+mr^2 F^'(r)+r^4 F^''(r))/r^4 and p_t=e^2m/r(e^2m/rHr^4+m^2F(r)-mr^2F^'(r)+r^3F^'(r))/r^4 where, H=1/4(FR+ F+T) and F^'=dF(r)/dr and F^''=d^2 F(r)/dr^2. Bronnikov and Starobinsky <cit.> considered the stability condition for wormhole geometry which is free from ghosts. They showed that no realistic wormhole can be constructed in scalar-tensor models for a positive scalar function. In f(R) gravity, the non-existence of wormhole could be disobeyed if df/dR=F(R) is negative <cit.>. In agreement with classical General Theory of Relativity, the violation of Null Energy Condition(NEC) denoted as ρ+p_r ≥ 0, ρ+p_t ≥ 0 and Weak Energy Condition (WEC) denoted as ρ≥ 0, ρ+p_r ≥ 0, ρ+p_t ≥ 0 are mainly due to the presence of exotic matter. The wormhole throat mainly doesnot respect the NEC <cit.>. In order to check the the necessary NEC, we simplify our calculations for ρ+p_r and ρ+p_t. For the exponential wormhole metric, to obey the NEC, ρ+p_r=e^-2m/r(-2m^2 F(r)+r^4 F^''(r))/r^4 and ρ+p_t= e^-2m/r(-2m+r)F^'(r)/r^2 should be positive at the throat. In the next section, we will discuss exponential wormhole solutions under the influence of four different viable f(R) gravity models. §.§ The Exponential Gravity Model The exponential gravity model was introduced and investigated by Cognola <cit.>. This model can describe the inflation of early universe and accelerated expansion of the current universe. The exponential model is defined as, f(R)= R-μ R_0[1-e^-R/R_0] where μ and R_0 are arbitrary constants. 
Now the equations Eq.(<ref>), Eq.(<ref>) and Eq.(<ref>) reduces to, ρ= e^-4m/r/r^8 R_0[-e^2m/rr^4(m^2+e^2m/rHr^4)R_0+e^2m^2 e^-2m/r/r^4 R_0m^2(4m(m-2r)+e^2m/rr^4 R_0)μ], p_r= e^-6m/r/r^12 R_0^2[-16 e^2m^2 e^-2m/r/r^4 R_0m^4(m-2r)^2μ -r^4 R_0[e^4m/rr^4[-m^2+Hr^4 e^2m/r]R_0+ e^2m(r^3+m e^-2m/r/R_0)/r^4m^2 [-12m^2+48mr-40r^2+r^4 R_0 e^2m/r]μ]] and p_t= e^-4m/r/r^8 R_0[e^2m/rr^4(m^2+e^2m/rHr^4)R_0+e^2m^2 e^-2m/r/r^4 R_0m^2(4m^2-12mr+8r^2-e^2m/rr^4 R_0)μ] Here we use m=1 and evaluate the graphical behaviour of ρ, ρ+p_r, ρ+p_t, ρ+p_r+2p_t and F=df/dR From the graph we observe that, * If μ=+ve, R_0=-ve, then ρ+p_r ≥ 0 * If μ=+ve, R_0=-ve or μ = -ve, R_0= +ve, then ρ+p_t ≥ 0 * ρ≤ 0 for all combinations of values of μ and R_0 * ρ+p_r+2 p_t ≥ 0 for all combinations of value of μ and R_0. * df/dR>0 for μ=-ve, R_0=+ve or μ=-ve, R_0=-ve and df/dR<0 for μ=+ve, R_0=-ve or μ=+ve,R_0=+ve. So we can conclude that if μ = +ve, R_0= -ve, the necessary NEC is respected throughout the wormhole geometry and F=df/dR<0, but WEC and SEC is partially violated. So for this combination of μ and R_0, we get wormhole solution which violates the non-existence theorem with the presence of negligible amount of exotic matter. Whereas in the case of General Relativity, NEC is violated by exponetial wormhole metric. §.§ Starobinsky f(R) gravity model This model was proposed by Satrobinsky <cit.> which is one of the most recognized f(R) gravity model. It is consistent with cosmological conditions and satisfies solar system and laboratory tests. Starobinsky model is given as, f(R)=R+a R_0[(1+R^2/R_0^2)^-l-1], where a, R_0 and l are free parameters. The field equations Eq.(<ref>), Eq.(<ref>) and Eq.(<ref>) of the exponential wormhole metric in f(R) gravity model reduce as, ρ= -H+m^2 [-e^-2m/r/r^4+ 4alm(1+4m^4 e^-4m/r/R_0^2 r^8)^-lR_0(4m^4(m+4lm-4(r+2lr))+e^4m/r(-3m+4r)r^8R_0^2)/(4m^4+ R_0^2 r^8 e^4m/r)^2], p_r= 1/r^4(4m^4+r^8 R_0^2 e^4m/r)^3 e^-2m/r[1+4m^4 e^-4m/r/r^8 R_0^2]^-l[64a e^2m/rlm^12r^4 R_0+768 e^2m/rl^2 m^12r^4 R_0+ 1024a e^2m/rl^3m^12r^4 R_0-512ae^2m/rlm^11r^5R_0-3072ae^2m/rl^2 m^11r^5R_0-4096ae^2m/rl^3 m^11r^5R_0 +768ae^2m/rlm^10r^6R_0+3584ae^2m/rl^2m^10r^6R_0+4096ae^2m/rl^3m^10r^6R_0-416ae^2m/rlm^8 r^12R_0^3 -576ae^6m/rl^2m^8r^12R_0^3+1536ae^6m/rlm^7r^13R_0^3+2304ae^6m/rl^2m^7r^13R_0^3-1536ae^2m/rlm^6r^14R_0^3 -2176ae^6m/rl^2m^6r^14R_0^3+20ae^10m/rlm^4r^20R_0^5-96ae^10m/rkm^3r^21R_0^5+80ae^10m/rlm^2r^22R_0^5 -(m^2-Hr^4e^2m/r)(1+4m^4e^-4m/r/r^8R_0^2)^l(4m^4+r^8R_0^2 e^4m/r)^3] and p_t= H+m^2[e^-2m/r/r^4+ 1/(4m^4+r^8 R_0^2e^4m/r)^2[4al(1+4m^4 e^-4m/r/r^8R_0^2)^-lR_0(4m^4((3+4l)m^2+4(1+2l)r^2-6m(r+ 2lr))-e^4m/rr^8(m^2-6mr+4r^2)R_0^2)]] After plotting the graphs FIG(<ref>) and FIG(<ref>) of ρ, ρ+p_r, ρ+p_t, ρ+p_r+2p_t and F=df/dR, we get the following analysis, If l=+ve * If a=+ve, R_0=+ve or a=-ve, R_0=-ve, then ρ+p_r ≥ 0 (for r>1.2). * If a=+ve, R_0=+ve or a=-ve, R_0=-ve, then ρ+p_t ≥ 0. * ρ≤ 0 for all combinations of a and R_0. * ρ+p_r+2p_t ≥ 0 for all combinations of a and R_0. * df/dR>0 for all combinations of a and R_0. If l=-ve * If a=-ve, R_0=+ve or a=+ve, R_0=-ve, then ρ+p_r ≥ 0 (for r>1.13). * If a=-ve, R_0=+ve or a=+ve, R_0=-ve, then ρ+p_r ≥ 0. * ρ≤ 0 for all combinations of a and R_0. * ρ+p_r+2p_t ≥ 0 for all combinations of a and R_0. * df/dR>0 for all combinations of a and R_0. So we conclude that, if l=+ve, a=+ve, R_0=+ve or l=+ve, a=-ve, R_0=-ve (for r>1.2) and l=-ve, a=-ve, R_0=+ve or l=-ve, a=+ve, R_0=-ve (for r>1.33), then NEC is respected throughout the geometry. 
But NEC is partially violated at the throat, that is to show the feasible traversable wormhole structure which have small amount of exotic matter at the throat of the wormhole. Again F=df/dR>0 represents the non-spherically symmetric wormhole solution. §.§ Tsujikawa f(R) gravity model This model was represented by Tsujikawa <cit.> and it is defined as, f(R)=R-μ R_0 tanh[R/R_0], where μ and R_0 are arbitrary constants. The field equations Eq.(<ref>), Eq.(<ref>) and Eq.(<ref>) now reduce to, ρ= e^-4m/r/r^8 R_0[-e^2m/rr^4[m^2+Hr^4 e^2m/r]R_0+m^2 μ[2m^2 e^-2m/r/r^4 R_0]^2[r^4 R_0 e^2m/r- 8m(m-2r)tanh[2m^2 e^-2m/r/r^4 R_0]]], p_r= e^-6m/r/r^12R_0^2[e^4m/rr^8[-m^2+ e^2m/rHr^4]R_0^2+m^2 μ[2m^2 e^-2m/r/r^4 R_0]^2[e^4m/rr^8 R_0^2+32m^2(m-2r)^2 [-2+3[2m^2 e^-2m/r/r^4 R_0]^2+8e^2m/rr^4(3m^2-12mr+10r^2)R_0 tanh[2m^2 e^-2m/r/r^4 R_0]]] and p_t= e^-4m/r/r^8 R_0[e^2m/rr^4[m^2+Hr^4 e^2m/r]R_0+m^2 μ[2m^2 e^-2m/r/r^4 R_0]^2[-e^2m/rr^4 R_0- 8(m-2r)(m-r)tanh[2m^2 e^-2m/r/r^4 R_0]]] Tsujikawa described that μ∈ (0.905,1) <cit.> to sustain the viability of the model. Whereas for the violation of non-existence theorem of static spherically symmetric wormhole F=df/dR>0. We evaluated the geometric nature of wormhole structure through energy conditions for μ=1.0135. From the following graphs FIG(<ref>) of ρ, ρ+p_r, ρ+p_t, ρ+p_r+2p_t and F=df/dR, we get the analysis as, * If μ=+ve, R_0=+ve or μ=+ve, R_0=-ve, then ρ+p_r ≥ 0. * If μ=+ve, R_0=+ve or μ=+ve, R_0=-ve, then ρ+p_t ≥ 0. * ρ≤ for all combinations of μ and R_0. * ρ+p_r+2p_t ≥ 0 for all combinations of μ and R_0. * df/dR>0 for μ=-ve, R_0=+ve or μ=-ve,R_0=-ve and df/dR<0 for μ=+ve, R_0=+ve or μ=+ve, R_0=-ve. We can conclude that, for μ=+ve, R_0=+ve or μ=+ve, R_0=-ve the NEC is respected by the exponential wormhole metric and F=df/dR<0, but at the same time WEC and SEC is partially violated as ρ<0. So for these particular combinations of μ and R_0, we get the wormhole solution which violates the non-existence theorem with the presence of minimal amount ofexotic matter. §.§ Gogoi-Goswami f(R) gravity model It is a new viable f(R) gravity model, constructed by Gogoi and Goswami <cit.>. This model is defined as, f(R)=R-a/πR_0 ^-1(R_0^2/R^2)-μ R_0[1-e^-R/R_0], where a and μ are two dimensionless constants and R_0 is a characteristic curvature constant having dimensions same as curvature scalar R. The allowed range for a is -1.68381<a<0.367545. Now from the plots FIG(<ref>) and FIG(<ref>) of ρ, ρ+p_r, ρ+p_t, ρ+p_r+2p_t and F=df/dR, we get the following analysis, If μ =+ve * If a=-ve, R_0=-ve, then ρ+p_r ≥ 0 (also for a=+ve, R_0=+ve if r>1.55). * If a=+ve, R_0=+ve or a=-ve, R_0=-ve, then ρ+p_t ≥ 0. * ρ≤ 0, for all combinations of a and R_0. * ρ+ p_r+2p_t ≥ 0, for all combinations of a and R_0. * df/dR>0 for a=+ve, R_0=+ve or a=-ve, R_0=-ve and df/dR<0 for a=+ve, R_0=-ve or a=-ve, R_0=+ve. If μ =-ve * ρ+p_r ≤ 0, for all combinations of a and R_0. * If a=+ve, R_0=+ve or a=-ve, R_0=-ve, then ρ+p_t ≥ 0. * ρ≤ 0, for all combinations of a and R_0. * ρ+ p_r+2p_t ≥ 0, for all combinations of a and R_0. * df/dR>0 for all combinations of a and R_0. From these we can conclude that, NEC is respected in the exponential wormhole geometry only if μ=+ve, a=-ve, R_0=-ve. But the WEC and the SEC are partially violated throughout the space-time, which represents that the wormhole structure has a normal matter at the throat. Again F=df/dR>0, indicates the non-spherical symmetry of the wormhole solution. Similar results can be achieved for μ=+ve, a=+ve, R_0=+ve but only if r>1.55. 
That is for this specific combination, NEC is violated at the throat which shows that the wormhole contains a small amount of exotic matter near the throat. § RESULTS AND DISCUSSION In this paper, we have carried out a comparative study of the so called "exponential" wormhole metric in General Relativity and modified f(R) theory of gravity. We have constructed the field equations for this exponential metric for both the cases. In recent years, many researchers studied the Morris-Thorne wormhole with different redshift and shape function in various viable modified theory of gravity, but no one has ever studied the exponential wormhole metric in f(R) gravity. The radius of the throat comes out as r=m, all the metric components are finite and the diagonal components are non-zero at r=m. In order to be class one (according to Karamarkar condition), m can take values m=0 or m=r or m=3/2r. Out of which m=0 is forbidden due to the flare-out condition. Again m=r represents the throat radius, so the exponential metric acts as a class one static and spherically symmetric line element at the throat. We have also studied this exponential wormhole metric in four viable f(R) gravity model, namely exponential model, Starobinsky model, Tsujikawa model and Gogoi-Goswami f(R) gravity model. The results obtained from the study in General Relativity and those viable f(R) gravity models are as follows, in General Relativity, * From the field equations in General Relativity, we have examined the energy conditions and it is found that ρ+p_r<0, ρ+p_t=0, ρ<0 and ρ+p_r+2p_t=0, that is the NEC, WEC and SEC are violated throughout the space-time indicating the presence of exotic matter. * The exponential wormhole metric obeys the flare-out condition everywhere and it doesnot possess any kind of singularity. * All the curvature components and the scalar invariants are finite everywhere, they are finite at the throat and decay to zero as r→∞ and as r→ 0. * The exponential wormhole metric violates the null Ricci Convergence condition which is important for the better understanding of the flare-out conditions. * The exponetial wormhole metric gives the scalar field equation of motion (g^ab∇_a ∇_b)Φ=0 and we can derive Einstein equation for a negative kinetic energy massless scalar field or phantom field, which is a evidence that the exponential wormhole metric represents a traversable wormhole In modified f(R) theory of gravity, we have obtained the field equations and also studied the energy conditions in four viable f(R) gravity model. The function f(R) has some free parameters or constants in these viable models. Some specific combinations of these parameters/constants show some interesting results in energy conditions which are quite different from those in General Relativity. * In case of Exponential f(R) gravity model, if we consider μ=+ve, R_0=-ve, then ρ+p_r ≥ 0, ρ+p_t ≥ 0 and ρ+p_r+2p_t ≥ 0 (all possible combinations) but ρ≤ 0 (for all possible combinations). We can conclude that in case of exponential f(R) gravity model, the exponential wormhole metric obeys the necessary NEC and F=df/dR<0 but partially violates the WEC and SEC. So μ=+ve, R_0=+ve is the perfect combination in order to get wormhole solution which violates the non-existence theorem with the presence of insignificant amount of exotic matter. 
* In case of Starobinsky f(R) gravity model, if l=+ve, a=+ve, R_0=+ve or l=+ve, a=-ve, R_0=-ve (for r>1.2) or l=-ve, a=-ve, R_0=+ve or l=-ve, a=+ve, R_0=-ve (for r>1.33), then ρ+p_r ≥ 0, ρ+p_t ≥ 0, ρ+p_r+2 p_t ≥ 0 (for all possible combinations) but ρ≤ 0 (for all possible combinations). So for these combinations of l, a and R_0, NEC is respected outside the throat and F=df/dR>0, which signifies that the wormhole has a non-spherical symmetry and the throat is filled with a small amount of exotic matter. * In case of Tsujikawa f(R) gravity model, if μ=+ve, R_0=+ve or μ=+ve, R_0=-ve, then ρ+p_r ≥ 0, F=df/dR<0 and ρ+p_t≥ 0, ρ+p_r+2p_t ≥ 0 (for all possible combinations) but for all possible combinations of μ and R_0, ρ≤ 0. While WEC and SEC is violated throughout the space-time. So μ=+ve, R_0=+ve or μ=+ve, R_0=-ve are the perfect combinations to get traversable wormhole in Tsujikawa f(R) gravity model with negligible amount of exotic matter. * In case of Gogoi-Goswami f(R) gravity model, ρ+p_r and ρ+p_t≥0 for μ=+ve, a=-ve, R_0=-ve. But ρ+p_r+2p_t ≥ 0 and ρ≤ 0 for all possible combinations of μ, a and R_0. So NEC is respected for μ=+ve, a=-ve, R_0=-ve, while WEC and SEC are violated for all possible combinations. Again for these specific combinations of μ, a and R_0, F=df/dR>0, which implies the non-spherical symmetry of the wormhole with the normal matter present at the throat. Again if μ=+ve, a=+ve, R_0=+ve, the wormhole again has non-spherical symmetry but this time the throat contains the exotic matter. While the NEC is respected just outside the wormhole throat. Thus in comparison with General Relativity, the exponential wormhole metric obeys the necessary NEC at the throat in modified f(R) gravity model for some particular combinations of the free parameters/constants. So the exponential wormhole metric could form a traversable wormhole geometry with negligible amount of exotic matter. § ACKNOWLEDGEMENT This work is supported by University Grants Commission, Ministry of Education, Govt. of India(NFOBC No.F. 82-44/2020(SA-III)) under the scheme NFOBC programme. ieeetr
http://arxiv.org/abs/2307.03914v1
20230708062942
Mixed Precision Iterative Refinement with Adaptive Precision Sparse Approximate Inverse Preconditioning
[ "Noaman Khan", "Erin Carson" ]
math.NA
[ "math.NA", "cs.NA" ]
Hardware trends have motivated the development of mixed precision algorithms in numerical linear algebra, which aim to decrease runtime while maintaining acceptable accuracy. One recent advance is an adaptive precision sparse matrix-vector product routine, which may be used to accelerate the solution of sparse linear systems by iterative methods. This approach is also applicable to the application of inexact preconditioners, such as sparse approximate inverse preconditioners used in Krylov subspace methods. In this work, we develop an adaptive precision sparse approximate inverse preconditioner and demonstrate its use within a five-precision GMRES-based iterative refinement method. We call this algorithm variant BSPAI-GMRES-IR. We then analyze the conditions for the convergence of BSPAI-GMRES-IR, and determine settings under which BSPAI-GMRES-IR will produce similar backward and forward errors as the existing SPAI-GMRES-IR method, the latter of which does not use adaptive precision in preconditioning. Our numerical experiments show that this approach can potentially lead to a reduction in the cost of storing and applying sparse approximate inverse preconditioners, although a significant reduction in cost may come at the expense of increasing the number of GMRES iterations required for convergence. § INTRODUCTION We consider the problem of solving large, sparse linear systems Ax=b using iterative methods, where A is a nonsingular n× n matrix. In recent years, the emergence of low precision arithmetic, such as half precision, on modern hardware has received renewed attention. Lower precision has many benefits, including a reduction in computation, storage, and data movement costs. However, with fewer bits, we have greater roundoff error and a smaller range of representable numbers. This has motivated the development of mixed precision algorithms, in which lower and higher precisions are used selectively in order to improve performance, memory usage, and energy consumption without sacrificing accuracy; for details, see the recent surveys <cit.>. Iterative refinement (IR) is a long-standing technique for iteratively improving the solution to a linear system. The idea of iterative refinement is to first compute an initial solution x_0 to Ax=b, often using a direct solver like LU factorization. The refinement steps in iterative refinement consist of computing the residual r_i=b-Ax_i, solving the correction equation Ad_i=r_i, and updating the solution x_i+1=x_i+d_i. In the case that LU factorization is used for computing the initial solution, the LU factors can be reused for solving for the correction term d_i. This is what we refer to as “standard IR” (SIR). Iterative refinement was originally proposed by Wilkinson in 1948, who suggested performing all the computations in a working precision denoted by u except the residual computation in precision u^2. This variant has been analyzed by Wilkinson <cit.> and Moler <cit.>. In 1977, Jankowski and Woźniakowski <cit.> and Skeel <cit.> introduced fixed precision iterative refinement, performing all computations in precision u. Langou et al. in 2006 used single precision in the computation of the LU factorization, which can be up to twice as fast as double precision, and a working precision in other parts of the computation <cit.>.
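For illustration, a minimal NumPy/SciPy sketch of such a scheme is given below: the LU factorization is computed once in single precision (playing the role of u_f), residuals are computed in double precision (quadruple precision is not natively available in NumPy, so u_r = u^2 is not fully emulated), and the factors are reused for every correction solve. The function name and test problem are illustrative only.

import numpy as np
import scipy.linalg as la

def standard_ir(A, b, n_iter=5):
    lu, piv = la.lu_factor(A.astype(np.float32))             # factorization precision u_f (single)
    x = la.lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(n_iter):
        r = b - A @ x                                        # residual in higher precision
        d = la.lu_solve((lu, piv), r.astype(np.float32))     # correction solve reuses the LU factors
        x = x + d.astype(np.float64)                         # update in the working precision u
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)); b = rng.standard_normal(200)
x = standard_ir(A, b)
print(np.linalg.norm(b - A @ x, np.inf) / (np.linalg.norm(A, np.inf) * np.linalg.norm(x, np.inf)))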
The availability of half precision in modern GPUs motivated the development of iterative refinement which uses three or more hardware precisions. Carson and Higham in 2018 proposed an iterative refinement scheme that uses three different precisions u_f, u, and u_r, which denote the factorization, working, and the residual precisions respectively; for an explanation see <cit.>. The authors also proposed a fourth precision, called the “effective precision”, denoted by u_s, which allows for general solvers to be used for the correction term d_i. For example, in standard iterative refinement, the LU factors computed in precision u_f results in u_s = u_f. With u_f ≥ u and u_r ≤ u^2, then the relative forward and backward errors will converge to level u when κ_∞(A)≤ u_f^-1, where κ_∞(A)=‖ A^-1‖_∞‖ A‖_∞ denotes the infinity-norm condition number of A. In <cit.>, the authors develop a GMRES-based iterative refinement algorithm (GMRES-IR) which uses the computed LU factors as preconditioners within GMRES to solve for the correction in each refinement step. Under the assumption that GMRES is executed in the working precision u, with matrix vector product and with preconditioned matrix computed in double the working precision, u_s = u, and thus GMRES-IR is guaranteed to produce forward and backward errors to the working precision for more ill-conditioned problems than standard iterative refinement. Assuming that u_f ≥ u and u_r ≤ u^2 a relative forward and backward errors to the level u is obtained for κ_∞(A)≤ u^-1/2u_f^-1. From a performance perspective, the requirement that the preconditioned matrix is applied in double the working precision is not attractive. In 2021, Amestoy et al. <cit.> proposed and analyzed a five-precision variant of GMRES-IR which, in addition to the working precision u, factorization precision u_f, and residual precision u_r, added two more precisions, namely u_g for the working precision within GMRES and u_p for precision in which the preconditioned matrix is applied to a vector within GMRES. The variant with setting u=u_g=u_p is used commonly in practice, although it is guaranteed to converge for a smaller range of condition numbers than the algorithm in <cit.>. Again assuming u_f ≥ u and u_r ≤ u^2, the relative forward and backward error to the level working precision is obtained for the matrices having κ_∞(A) ≤ u^-1/3u_f^-2/3, although this restriction is likely overly pessimistic in practice. Most existing analyses of GMRES-based iterative refinement schemes assume that an LU factorization is computed for use as a left preconditioner within GMRES in each refinement step. But when A is very sparse, the performance of this approach may not be attractive since the LU factorization of A may have considerable fill-in. In practice, inexact preconditioners are often used, such as incomplete LU factorizations or sparse approximate inverses (SPAI). Using SPAI has an advantage because it is, in theory, highly parallelizable, as each column can be computed independently, and its application involves only a sparse matrix-vector product (SpMV). In <cit.>, the authors propose a new variant called SPAI-GMRES-IR which, instead of LU factors, uses a sparse approximate inverse preconditioner (computed in a precision u_f with a given accuracy threshold ε, which controls the residual in each column) as a preconditioner within five-precision GMRES-IR. 
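A corresponding sketch of the GMRES-based refinement loop is given below, with the low-precision LU factors wrapped as the preconditioner handed to SciPy's GMRES; in SPAI-GMRES-IR the role of this preconditioner is instead played by the computed sparse approximate inverse M. Precisions are only loosely emulated, the preconditioning side is left to the library, and all names are illustrative.

import numpy as np
import scipy.linalg as la
import scipy.sparse.linalg as spla

def gmres_ir(A, b, n_iter=4, restart=30):
    n = A.shape[0]
    lu, piv = la.lu_factor(A.astype(np.float32))      # low-precision factorization, used only to precondition
    apply_M = lambda v: la.lu_solve((lu, piv), v.astype(np.float32)).astype(np.float64)
    M = spla.LinearOperator((n, n), matvec=apply_M)   # M ~ A^{-1} handed to GMRES
    x = apply_M(b)                                    # initial solve from the LU factors
    for _ in range(n_iter):
        r = b - A @ x                                 # residual (here in double precision)
        d, info = spla.gmres(A, r, M=M, restart=restart, maxiter=20)
        x = x + d                                     # update in the working precision
    return x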
The analysis of SPAI-GMRES-IR shows that as long as ε and u_f satisfy the constraints u_f_2(A^T) ≲ε≲ u^-1/2κ_∞(A)^-1/2, then the constraints on condition number for forward error and backward error to converge are the same as for five-precision GMRES-IR with the full LU factors, although it is clear that convergence of the GMRES solves may be slower. In 2022, Graillat et al. proposed an adaptive, mixed precision algorithm for computing sparse matrix-vector products that adaptively selects the precision in which each matrix element is stored and applied by splitting them into buckets based on their magnitude and then using progressively lower precisions for the buckets with smaller elements <cit.>. In this work, we apply the idea proposed in <cit.> to the application of the computed SPAI M within SPAI-GMRES-IR. We call this approach BSPAI-GMRES-IR, where the `B' stands for `bucketed'; the components of M are split into different buckets, with a different precision associated with each bucket. In Section <ref> we give background on SPAI preconditioners and the adaptive precision sparse matrix-vector product approach in <cit.>, and discuss bucketed SPAI and recent related approaches. In Section <ref>, we analyze under which conditions the BSPAI-GMRES-IR will converge and bound the forward and backward errors. In Section <ref> we perform a set of numerical experiments which illustrate the behavior of BSPAI-GMRES-IR. In Section <ref> we conclude and discuss future work. § BACKGROUND §.§ Notation First we mention some notation which will be used in rest of the text. Important for us will be the condition numbers. For a given matrix A, and a vector x, and a norm p, we define κ_p(A) = ‖ A^-1‖_p‖ A‖_p,_p(A) = ‖ |A^-1||A|‖_p,_p(A,x) = ‖ |A^-1||A||x|‖_p/‖ x ‖_p, where |A|=(|a_ij|). In case p is not specified we assume the norm to be infinity. For unit roundoffs we will use the notation u and subscripts on u to distinguish various precisions. For rounding error analysis, we will use the notation γ_k = ku/1-ku, γ̃_k=cku/1-cku, where c is a small constant independent of problem dimension. A superscript on γ indicates that the corresponding u has that superscript as a subscript; for example, γ_k^f = ku_f/(1-ku_f). The quantities computed in finite precision will be denoted by hats. §.§ Sparse Approximate Inverse Preconditioners Sparse approximate inverse preconditioning is based on the idea of explicitly constructing a matrix M≈ A^-1. Although SPAI is a general algebraic preconditioning technique and is thus not expected to be effective for every problem, the use of SPAI-type preconditioners within Krylov subspace methods has the advantage that the application of the preconditioner involves only matrix-vector products, unlike, e.g., LU-based preconditioners which require two triangular solves. There are many potential techniques for computing a sparse approximate inverse M; see the survey <cit.>. A popular approach based on Frobenius norm minimization produces a sparse approximate inverse in unfactored form (i.e., a single matrix M), in which M is computed as the solution to min_𝒥∈𝒮‖ I-AM‖_F, where 𝒥∈𝔹^n× n is a prescribed binary sparsity pattern in the set of all possible binary sparsity patterns 𝒮∈𝔹^n× n. The benefit is that we can decouple this minimization problem as min_𝒥∈𝒮‖ I-AM‖_F^2 = ∑_k=1^n min_𝒥_k∈𝒮_k‖ e_k-Am_k‖_2^2, where 𝒥_k, m_k, and e_k represent the kth columns of 𝒥, M, and I, respectively. The computed M is then reduced to solving a linear least squares problem for each column m_k of M. 
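As a concrete illustration of this column-wise construction, the sketch below computes a SPAI with a fixed sparsity pattern chosen a priori (here, the pattern of A itself, one of the common choices mentioned below); it omits the adaptive pattern augmentation discussed in the remainder of this section, and the helper name is illustrative.

import numpy as np
import scipy.sparse as sp

def spai_fixed_pattern(A):
    A = sp.csc_matrix(A)
    n = A.shape[1]
    cols = []
    for k in range(n):
        Jk = A[:, k].nonzero()[0]                     # prescribed pattern of column k of M
        Ik = np.unique(A[:, Jk].nonzero()[0])         # rows of A touched by those columns
        Abar = A[Ik, :][:, Jk].toarray()              # small dense |Ik| x |Jk| block
        ebar = (Ik == k).astype(float)                # e_k restricted to those rows
        mbar = np.linalg.lstsq(Abar, ebar, rcond=None)[0]
        cols.append(sp.csc_matrix((mbar, (Jk, np.zeros(len(Jk), dtype=int))), shape=(n, 1)))
    return sp.hstack(cols).tocsc()                    # M approximately minimizing ||I - A M||_F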
From a performance point of view, the benefit is that these linear least squares problems are solved independently and in parallel. Early works based on this approach used a fixed prescribed sparsity pattern 𝒥. The set 𝒥_k extracts column indices of A that are relevant for solving for a column m_k. The nonzero rows of the submatrix A(:, 𝒥_k) are represented by the so-called “shadow” of 𝒥_k, ℐ_k = { i∈{1,…, n}: ∑_j∈𝒥_k |a_ij|≠ 0}, where a_ij is the (i,j) entry of A. Thus each term in the summation on the right in (<ref>) can be reduced to min_𝒥(m̅_k) = 𝒥_k‖e̅_k - A̅_k m̅_k ‖_2, where A̅_k = A(ℐ_k, 𝒥_k)∈ℝ^|ℐ_k|,|𝒥_k|, m̅_k = m_k(𝒥_k)∈ℝ^|𝒥_k|, e̅_k = e_k(ℐ_k)∈ℝ^|ℐ_k|, and 𝒥(m̅_k) is the binary sparsity pattern of m̅_k. This results in small least squares problems which can be solved directly, for example, via QR factorization. The deficiency of this approach is that it is hard to predict a sparsity pattern a priori that will ensure an effective preconditioner. Mostly common choices used are the sparsity pattern of A, A^T, or a power of a sparsified A, although generally its not guaranteed that the preconditioner produced will be effective. For overcoming this difficulty, many authors proposed iterative approaches. In one such approach, one starts with an initial sparsity pattern and adds nonzeros to this pattern until ‖ e_k - Am_k‖_2≤ε becomes true for some threshold ε or the maximum number of nonzeros has been reached. For a more detailed explanation of this type algorithm, see, e.g., the work by Cosgrove et al. <cit.>, Grote and Huckle <cit.>, and Gould and Scott <cit.>. The most successful among these algorithms is that of Grote and Huckle <cit.> which is commonly used to compute a SPAI preconditioner <cit.>, and which we use in the present work. To overcome the difficulty of choosing the sparsity pattern a priori for a resulting effective preconditioner, the authors in <cit.> proposed an adaptive approach that dynamically determines the most beneficial nonzero indices to include. Algorithm <ref> is one specific variant of Grote and Huckle's algorithm, which is taken from <cit.>. The algorithm requires an input matrix A, 𝒥 as the initial binary sparsity pattern, ε as the convergence tolerance, α, for the maximum number of iterations for each column, and β, for the maximum number of nonzeros added to the pattern in each iteration. The algorithm for each column solves the linear least squares problem (<ref>) for a given initial sparsity pattern 𝒥 and computes the residual s̅_k (lines <ref>-<ref>). This column is considered finished when the 2-norm of the residual is less than the threshold ε. Otherwise, we continue adding entries to 𝒥. We construct an index set ℒ_k in line <ref> which contain the nonzeros entries in s̅_k. From the index set ℒ_k, for every element ℓ we go through that ℓth row of A and choose the column indices of the nonzero entries for which we define a set name 𝒥̃_k which are not 𝒥_k. The set 𝒥̃_k is the union of the sets 𝒩_ℓ which contain the potential indices that can be added to 𝒥_k, out of which we select only a subset of the “most important” indices. There are many ways to determine which indices are most important. Grote and Huckle's technique considers a univariate minimization problem, through which the quantity ρ_jk computed in line <ref> gives a measure of the 2-norm of the new residual if index j is added to 𝒥_k. A well-known heuristic (see, e.g., <cit.>) is to mark indices as “acceptable” if their ρ_jk is less than the arithmetic mean ρ̅_k over all j. 
Then we choose up to β of the best (smallest ρ_jk) indices acceptable to add (lines <ref>-<ref>) in each of the α iterations. In line <ref> there is no need to recompute the QR factorization fully in each step; the factorization can be updated by using the QR factorization computed in the previous step and the entries added to A̅_k; see <cit.>. Typical values for the parameters are ε∈ [0.1,0.5], α∈{1,…,5}, and β∈{3,…,8} <cit.>. In SPAI, although each column can theoretically be computed in parallel, the construction is often costly, specially for large-scale problems; see, e.g., <cit.>. SPAI memory requirements scale quadratically and the computational cost scales cubically in the number of nonzeros per row <cit.>. Thus applying the bucketing idea to sparse approximate inverse preconditioner in which low precision is used for the buckets containing elements of smaller magnitude has the potential to significantly reduce this cost. For modern hardware like GPUs, the construction of efficient sparse approximate inverse computations has been the subject of much recent work; see, e.g., <cit.>. §.§ Adaptive Precision Sparse Matrix-Vector Products As mentioned, with the emergence of low precision arithmetic, such as half precision fp16 or bfloat16 on modern computers, mixed precision algorithms in numerical linear algebra have received renewed attention. Many variants of mixed precision algorithms have been recently proposed; see, for example, the works <cit.> on matrix multiplication. The works <cit.> proposed mixed precision iterative refinement methods based on preconditioned Krylov subspace methods. The authors in <cit.> proposed a general preconditioning technique based on a low-rank approximation of the error. A particularly fruitful idea is the concept of adaptive precision algorithms, in which the precisions used need not be determined a priori, but are instead dynamically set based on the data involved in the computation and perhaps some user-specified accuracy constraints. Often, the precisions chosen are proportional to importance of the data, which is inherently application dependent. For example, the authors in <cit.> introduced an adaptive precision block Jacobi preconditioner with idea of choosing the precision of each block based on its condition number. Amestoy et al. <cit.> introduced mixed precision block low rank compression that partitions a low rank matrix into several low-rank components of decreasing norm and stores each of them in a correspondingly decreasing precision. Ahmad et al. <cit.> introduced an algorithm for sparse matrix-vector products that switches the elements in the range of [-1, 1] to single precision while keeping the other elements in double precision. The authors in <cit.> develop a “quantized” dot product algorithm, adapting the precision of each vector element based on its exponent. In recent work, which is the focus of the present paper, Graillat et al. <cit.> develop an adaptive precision sparse matrix-vector product algorithm with the idea of adapting the precision of each matrix element based on its magnitude. The elements of the matrix are split into different buckets and different precisions are used to store and compute with elements in each bucket. Buckets with smaller elements are stored in lower precision. This approach is used to apply the matrix A to a vector within GMRES-IR with Jacobi preconditioning. We now give an overview of the results of <cit.>. 
For matrix-vector products in a uniform precision, the Oettli-Prager<cit.>, <cit.> and Rigal-Gaches<cit.>, <cit.> theorems give the formula for normwise backward error, ε_nw=min{ε:ŷ = (A+Δ A)x, Δ A ≤εA } = ŷ-y/yx. A bound on the normwise backward error for the uniform precision case is ε_nw≤ pu, where p is the maximum number of nonzero elements per row of A; see, e.g., <cit.>. The idea of the adaptive precision sparse-matrix vector product approach of Graillat et al. <cit.> is, for a given set of q precisions i.e u_1 < u_2 … < u_q, to split the elements of the matrix A into q buckets based on the magnitude of the elements. Using this approach splits the nonzeros elements in each row i of the computed M into up to q buckets and then computes the partial inner products associated with each bucket in up to q different precisions. The partial inner products are then all summed in precision u_1. We briefly recall the notation, algorithm, and key points of the error analysis given in <cit.>. Let J_i denote the set of column indices of the nonzero elements in row i of A. Each row i of the matrix A will be partitioned into the q buckets B_ik⊂ [1,n] for k=1:q. How we define the buckets will affect the resulting normwise (or componentwise) backward error. Assume that we want to construct the buckets B_ik in such a way that the backward error obtained is at most of order O(ϵ), where ϵ is the user defined target accuracy with ϵ ≥ u_1. We can define the buckets as B_ik = { j∈ J_i : |a_ij| ∈ P_ik}, with P_ik= (ϵ‖ A ‖/u_2, +∞) for k=1 (ϵ‖ A ‖ /u_k+1, ϵ‖ A ‖/u_k] for k=2:q-1 [0, ϵ‖ A ‖/u_q] for k=q . The procedure for placing elements of a matrix A into buckets according to this rule is given in Algorithm <ref>. The partial inner product y_i^(k) = ∑_j∈ B_ik a_ijx_j associated with bucket B_ik is computed in precision u_k, and all partial inner products are accumulated in precision u_1 (the highest precision). This procedure is given in Algorithm <ref>. Theorem 3.1 in <cit.> states that if y=Ax is computed using this approach, then we have ε_nw≤ (q-1) u_1 + cϵ, where c= (1+ (q-1)u_1 )+ max_i∑_k=1^qp_ik^2(1+u_k)^2, and p_ik is the number of elements in B_ik. We note that Graillat et al. also provide different bucketing strategies that give guaranteed bounds on the componentwise backward error. The drawback of these is that the bucketing scheme depends on the values in the vector x to be multiplied, and thus the bucketing would need to be redone for each matrix-vector product encountered. Thus for practical reasons we restrict ourselves to the variant which provides normwise error bounds. § GMRES BASED ITERATIVE REFINEMENT WITH BSPAI Our approach will be to apply the adaptive precision sparse-matrix vector product described in Section <ref> to the application of a sparse approximate inverse preconditioner M computed using <ref> within GMRES-based iterative refinement. The resulting algorithm, which we refer to as BSPAI-GMRES-IR, is given as Algorithm <ref>. Our aim is to derive the conditions under which BSPAI-GMRES-IR (Algorithm <ref>) will converge. We can determine the resulting backward and forward errors in GMRES when we use the adaptive precision SpMV to apply the preconditioner M within each GMRES iteration. We will assume here that matrix-vector products with A are computed in precision u_p within GMRES (where we will generally take u_p=u_g=u, using the notation of <cit.>). 
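A compact dense-matrix sketch of this bucketed product is given below; precisions are emulated with NumPy's float64/float32/float16, ||A|| is taken as the infinity norm, and stored zeros are simply skipped. It is meant only to make the bucket boundaries above concrete, not to reproduce the sparse implementation of <cit.>.

import numpy as np

def adaptive_spmv(A, x, eps, precisions=(np.float64, np.float32, np.float16)):
    q = len(precisions)
    u = [np.finfo(p).eps / 2 for p in precisions]        # unit roundoffs u_1 < ... < u_q
    normA = np.linalg.norm(A, np.inf)
    thr = [eps * normA / u[k] for k in range(1, q)]      # thresholds eps*||A||/u_k for k = 2..q
    y = np.zeros(A.shape[0], dtype=precisions[0])
    for i in range(A.shape[0]):
        a, mag = A[i], np.abs(A[i])
        acc = precisions[0](0.0)
        for k in range(q):
            hi = np.inf if k == 0 else thr[k - 1]        # bucket k: magnitudes in (lo, hi]
            lo = 0.0 if k == q - 1 else thr[k]
            mask = (mag <= hi) & (mag > lo) if k < q - 1 else (mag <= hi) & (mag > 0)
            part = np.dot(a[mask].astype(precisions[k]), x[mask].astype(precisions[k]))
            acc += precisions[0](part)                   # accumulate all partial sums in u_1
        y[i] = acc
    return y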
Note that we could, in principle, also use the adaptive precision SpMV to apply A to a vector; extending the analysis to this case is simple and the results will not be significantly different as long as u_p ≈ϵ_bspai. We give backward and forward error bounds for GMRES for this case as well below. Following <cit.> and <cit.>, let z_j = MA v̂_j be computed in each iteration of MGS-GMRES as described above, where A is applied in precision u_p and M is applied using the adaptive precision SpMV approach (Algorithm <ref>). Then (A+Δ A) v̂_j = ŵ_j, Δ A _F≤γ_q^p A_F (M + Δ M) ŵ_j = ẑ_j, Δ M_F ≤((q-1)u_1 + cϵ) M_F. Then ẑ_j = (M+Δ M)(A+Δ A) v̂_j ≈ (MA + MΔ A + Δ MA)v̂_j = MAv̂_j + f_j, where f_j = (MΔ A + Δ A M)v̂_j. We can bound the norm of this quantity by f_j _2 ≲γ_q^p M_F A_F + ( (q-1) u_1 + c ϵ) MA ≤ (q u_p + (q-1)u_1 + cϵ) M _F A_F v̂_j _2. This means that we can apply <cit.> with ϵ_p = (qu_p + (q-1)u_1 + c ϵ) M_FA_F/MA_F. Note also that we must apply the preconditioner M to the right-hand side r̂_i. Denoting s_i = Mr̂_i, the computed ŝ_i satisfies ŝ_i = (M+Δ M) r̂_i = s_i + Δ M r̂_i. We then have ŝ_i - s_i _∞ ≤((q-1)u_1+cϵ) M_∞r̂_i _∞ ≤((q-1)u_1+cϵ) κ_∞(M) s_i _∞. Letting Ã=MA, and assuming we are solving the n× n linear system Ãd_i=ŝ_i, the conclusions of <cit.> say that for MGS-GMRES in working precision u_g, except for products with à which satisfy fl(Ãv) = Ãv + f, f_2 ≲ϵ_p Ã_F v_2, as long as σ_min(Ã) ≳(k^1/2(qu_p + (q-1)u_1 + c ϵ) M_FA_F/Ã_F + γ̃_kn^g) Ã_F, then for some step k≤ n, the algorithm produces an approximate solution d̂_i satisfying (à + ΔÃ) d̂_i = ŝ_i + Δŝ_i, Ã_F ≲(k^1/2(qu_p + (q-1)u_1 + c ϵ) M_FA_F/Ã_F + γ̃_kn^g) Ã_F, Δŝ_i_2 ≲γ̃_kn^g ŝ_i_2 ≲ n^1/2γ̃_kn^g s_i_∞. From (<ref>), we can write s_i - Ãd̂_i = ΔÃd̂_i - (ŝ_i - s_i ) - Δŝ_i, which we can bound using (<ref>), (<ref>), and (<ref>), giving ‖ s_i - Ãd̂_i ‖_∞ ≤ΔÃ_∞d̂_i_∞ + ŝ_i - s_i_∞ - Δŝ_i ≤ n (k^1/2(qu_p + (q-1)u_1 + c ϵ) M_FA_F/Ã_F + γ̃_kn^g) Ã_∞d̂_i_∞ ≤+ ((q-1)u_1+cϵ) κ_∞(M) s_i _∞ + n^1/2γ̃_kn^g s_i_∞ ≤ n (k^1/2n(qu_p + (q-1)u_1 + c ϵ) κ_∞(M) + γ̃_kn^g) Ã_∞d̂_i_∞ ≤+ ((q-1)u_1+cϵ) κ_∞(M) s_i _∞ + n^1/2γ̃_kn^g s_i_∞ ≤ kn^2 ( u_g + (qu_p + (q-1)u_1+cϵ)κ_∞(M) ) ( Ã_∞d̂_i_∞ + s_i _∞). Thus the normwise relative backward error of the system Ãd̂_i = s_i is bounded by s_i - Ãd̂_i_∞/Ã_∞d̂_i_∞ + s_i _∞≲ f(n,k)( u_g + (qu_p + (q-1)u_1+cϵ)κ_∞(M) ), and thus the relative error of the computed d̂_i is bounded by d_i -d̂_i_∞/d_i_∞≲ f(n,k)( u_g + (qu_p + (q-1)u_1+cϵ)κ_∞(M) ) κ_∞(Ã), where f(n,k) = kn^2. From (<ref>) and (<ref>), we can say that if u_1≈ϵ≈ u_p, then the backward and forward errors in MGS-GMRES with adaptive precision SpMV used to apply M will be approximately the same as the case of uniform precision SpMV; see <cit.>. We note that in the case where we use the adaptive precision SpMV also in applying the matrix A to a vector within GMRES, the bound for the normwise relative backward error in (<ref>) becomes s_i - Ãd̂_i_∞/Ã_∞d̂_i_∞ + s_i _∞≲ f(n,k)( u_g + (2(q-1)u_1+(c_A+c_M)ϵ)κ_∞(M) ), where we assume that the same buckets are used for both M and A, and c_A and c_M are the values of c in (<ref>) associated with A and M, respectively. Similarly, the relative forward error becomes d_i -d̂_i_∞/d_i_∞≲ f(n,k)( u_g + (2(q-1)u_1+(c_A+c_M)ϵ)κ_∞(M) ) κ_∞(Ã). Thus if u_1≈ϵ, MGS-GMRES with the adaptive precision SpMV used for applying both M and A will produce backward and forward errors similar to the MGS-GMRES variant in <cit.> with the setting u_p = u_1. 
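For reference, the two quantities bounded above can be measured directly for a computed correction; a small illustrative helper is:

import numpy as np

def normwise_backward_error(Atilde, d_hat, s):
    num = np.linalg.norm(s - Atilde @ d_hat, np.inf)
    den = np.linalg.norm(Atilde, np.inf) * np.linalg.norm(d_hat, np.inf) + np.linalg.norm(s, np.inf)
    return num / den                                   # left-hand side of the backward error bound

def relative_forward_error(d_hat, d_true):
    return np.linalg.norm(d_true - d_hat, np.inf) / np.linalg.norm(d_true, np.inf)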
§ NUMERICAL EXPERIMENTS We perform numerical experiments to evaluate the performance of BSPAI-GMRES-IR by comparing it with SPAI-GMRES-IR in <cit.>. We stress that we only expect BSPAI-GMRES-IR to have a clear potential advantage over SPAI-GMRES-IR for the case u_f=u. Otherwise, for example, if u_f= half and u= single, SPAI-GMRES-IR stores the preconditioner entirely in precision u_f but applies it in precision u. BSPAI-GMRES-IR, on the other hand, stores the preconditioner in multiple precisions, where we must have u_1= single in order to enable reading effective application precision ϵ≈ u. We also note that this motivates future work in the direction of decoupling the storage and application precisions in adaptive precision sparse matrix-vector products. All the experiments are performed in MATLAB R2021a. The matrices we tested are taken from the SuiteSparse Matrix Collection <cit.>. We run the experiments using four precisions which are half, single, double, and quadruple. For properties of these precision, see Table <ref>. For half precision, we use the library[]. We use MATLAB built-in datatypes for single and double precision and the Advanpix Multiprecision Computing Toolbox for quadruple precision; see <cit.>. The code for reproducing the experiments in this paper is available online[]. Matrices used in the experiments along with their key properties are listed in Table <ref>. We set the right-hand side to the vector with equal components and unit 2-norm in all tests. For the GMRES tolerance, we set τ = 10^-4 in the case working precision is single and τ = 10^-8 for the case working precision is double, which responds to roughly the square root of the working precision. These values are set by default used in the previous works by <cit.>, <cit.>, and are also used in practical applications. In all invocations, we use the GMRES setting u_g=u_p=u, which is commonly used in practice. We tested the matrices in Table <ref> with a subset of the settings (u_f, u, u_r)= (double, double, quad), (u_f, u, u_r)= (single, double, quad), and (u_f, u, u_r)= (half, single, double), depending on whether SPAI-GMRES-IR converges with the given precisions and value of τ. We choose the identity matrix as the initial sparsity pattern for SPAI in all tests. When A has zero entry on the diagonal, this results in a zero column in the SPAI preconditioner, as mentioned in Sedlacek <cit.>. Therefore we only choose problems with nonzero entries on the diagonal, but note that this could be remedied by either permuting A or using the initial sparsity pattern of A, which, when SPAI is run on A^T, guarantees that we obtain a with nonzero rows <cit.>. In all tests, the matrices are preprocessed with column scaling such that the absolute value of the largest value in every column of A^T is 1. This one-sided scaling was proposed in <cit.> to avoid overflow in the computation of QR due to low precision. To be specific, for obtaining M, we run SPAI on the scaled A^T D and then set M = M^T D, where is the D diagonal scaling matrix. In both BSPAI-GMRES-IR and SPAI-GMRES-IR we use the variant in which u_g=u_p=u, which is commonly used in practice. For all tests, we use β=8, which is in the range suggested by Sedlacek <cit.>. For BSPAI-GMRES-IR, when u is double, we use the precisions with u_1= double, u_2 = single, u_3 = half, and u_4=1. When u is single, we use the precisions u_1 = single, u_2 = half, and u_3=1. Note that the choice u_1=1 enables the dropping of elements in M, as described in <cit.>. 
For each linear system and given combination of precisions, we run BSPAI-GMRES-IR with various values of ϵ≥ u_1, and use the same value of ε for both BSPAI-GMRES-IR and SPAI-GMRES-IR. We report our results in a series of tables. The first column of the table lists the matrix name, and the second column indicates whether we use BSPAI or SPAI and the corresponding parameters. The third column gives the infinity-norm condition number of the preconditioned coefficient matrix. The fourth column gives information about the number of nonzeros and their storage precisions. The first number gives the total number of nonzeros, and the tuple that follows gives information about the precisions: element i in the tuple gives the number of nonzeros stored in precision u_i. The fifth column gives the storage cost of the BSPAI preconditioner with mixed precision storage as a percentage of the cost of the SPAI preconditioner with uniform precision storage (the lower the better). The final column gives information about convergence of the iterative refinement process. The first number gives the total number of GMRES iterations over all refinement steps, and element i of the tuple that follows gives the number of GMRES iterations in refinement step i. Thus the number of elements in the tuple gives the number of iterative refinement steps required until convergence of the forward and backward errors to the level of the working precision. For each setup we form one table with five columns in which first one represent matrices names, second for the preconditioner( SPAI and BSPAI), third for the condition number of the preconditioned system, fourth for the total number of nonzeros (number of nonzeros in each bucket) and the last column is for the information about the number of GMRES-IR refinement steps and GMRES iterations per refinement step. §.§ Experiments with (u_f, u, u_r) = (double, double, quad) Table <ref> shows the experiments for the setting (u_f, u, u_r)= (double, double, quad), with both ϵ=2^-53 and ϵ=2^-37. First, we note that where SPAI-GMRES-IR converges, BSPAI-GMRES-IR also converges, as predicted by our theoretical results, although of course the adaptive precision storage can result in a different total number of GMRES iterations across the refinement steps. The performance for the matrix using ε=0.1 with ϵ=2^-53 and ϵ=2^-37, BSPAI-GMRES-IR takes 21 total GMRES iterations to converge to double precision accuracy while SPAI-GMRES-IR takes total 14 GMRES iterations. The storage (and computation) savings of the adaptive precision approach can be significant for this case; using ϵ = 2^-53 and ϵ=2^-37, requires only 74.9% and 42.6% of the storage/computation cost as the uniform precision approach, respectively. This matrix perhaps represents a best-case scenario. For , we also see reasonable reductions in storage cost for the two choices of ϵ; note that although the choice ϵ=2^-37 results in significant storage savings, the number of GMRES iterations required increases significantly. For some matrices, such as and , there appears to be no benefit to the adaptive precision approach. §.§ Experiments with (u_f, u, u_r) = (single, single, double) Table <ref> shows the experiments for the setting (u_f, u, u_r)= (single, single, double) with u_1 = single, u_2= half, and u_3= 1, and with ϵ=2^-24 and ϵ=2^-18. In all tests, both SPAI-GMRES-IR and BSPAI-GMRES-IR converge to single precision accuracy. 
For the value ϵ=2^-24, BSPAI-GMRES-IR takes about the same number of iterations as SPAI-GMRES-IR and requires on average 98.6% of the storage cost of the uniform precision approach. For the matrix , BSPAI-GMRES-IR with ϵ=2^-18 requires 76.5% of the storage cost of the uniform precision approach and converges in the same number of iterations as SPAI-GMRES-IR. For matrices , , and , although using ϵ=2^-18 results in storage costs of 66.7%, 70.7% and 73.8% of the uniform precision approach, respectively, a greater number of iterations is required than in the uniform precision SPAI-GMRES-IR. § CONCLUSIONS AND FUTURE WORK In this work we use an adaptive precision sparse approximate inverse preconditioner within mixed precision GMRES-based iterative refinement. Following the approach of Graillat et al. <cit.>, after computing a sparse approximate inverse in low precision, we place the elements of the preconditioner into buckets for a given set of precisions based on their magnitude. We then apply the preconditioner to a vector in mixed precision within five-precision GMRES-IR; we call this algorithm variant BSPAI-GMRES-IR. We analyze the backward and forward errors of the mixed precision left-preconditioned GMRES method that uses the bucketed sparse approximate inverse as a left preconditioner. Our analysis shows that if we choose u_1≈ϵ≈ u_p, then the normwise backward and forward errors will be close to those obtained in the uniform precision case. This indicates that BSPAI-GMRES-IR will converge under the same conditions as SPAI-GMRES-IR. We perform a set of numerical experiments which show that the adaptive sparse matrix-vector product approach can reduce the cost of storing and applying the sparse approximate inverse preconditioner, although a significant reduction in cost often comes at the expense of an increased number of GMRES iterations required for convergence. We note that it is possible to extend this approach to other preconditioners for Krylov subspace methods. We again stress that a fruitful area of future work is to extend the adaptive sparse matrix-vector product approach to decouple the storage and computation precisions. This would make the approach beneficial in cases where one would ideally like to store a matrix in lower precision but apply it to a vector in higher precision, which is often the case within SPAI-GMRES-IR. Other potential future work involves the development and analysis of other adaptive-precision matrix computations, such as triangular solves.
http://arxiv.org/abs/2307.04106v2
20230709060722
Parametric Depth Based Feature Representation Learning for Object Detection and Segmentation in Bird's Eye View
[ "Jiayu Yang", "Enze Xie", "Miaomiao Liu", "Jose M. Alvarez" ]
cs.CV
[ "cs.CV" ]
Parametric Depth Based Feature Representation Learning for Object Detection and Segmentation in Bird’s-Eye View Jiayu Yang^1,3^*, Enze Xie^2, Miaomiao Liu^1, Jose M. Alvarez^3 ^1Australian National University, ^2The University of Hong Kong, ^3NVIDIA {jiayu.yang, miaomiao.liu}@anu.edu.au, [email protected], [email protected] Accepted 08-Jul-2023. Received 18-Jun-2023; in original form 23-May-2023 ======================================================================================================================================================================================================================================== empty < g r a p h i c s > figureGiven multi-view images and camera parameters, our framework utilize parametric depth to transform image feature into BEV space for jointly estimating 3D object detection, BEV segmentation and a BEV visibility map. Recent vision-only perception models for autonomous driving achieved promising results by encoding multi-view image features into Bird's-Eye-View (BEV) space. A critical step and the main bottleneck of these methods is transforming image features into the BEV coordinate frame. This paper focuses on leveraging geometry information, such as depth, to model such feature transformation. Existing works rely on non-parametric depth distribution modeling leading to significant memory consumption, or ignore the geometry information to address this problem. In contrast, we propose to use parametric depth distribution modeling for feature transformation. We first lift the 2D image features to the 3D space defined for the ego vehicle via a predicted parametric depth distribution for each pixel in each view. Then, we aggregate the 3D feature volume based on the 3D space occupancy derived from depth to the BEV frame. Finally, we use the transformed features for downstream tasks such as object detection and semantic segmentation. Existing semantic segmentation methods do also suffer from an hallucination problem as they do not take visibility information into account. This hallucination can be particularly problematic for subsequent modules such as control and planning. To mitigate the issue, our method provides depth uncertainty and reliable visibility-aware estimations. [^*The work is done during an internship at NVIDIA] We further leverage our parametric depth modeling to present a novel visibility-aware evaluation metric that, when taken into account, can mitigate the hallucination problem. Extensive experiments on object detection and semantic segmentation on the nuScenes datasets demonstrate that our method outperforms existing methods on both tasks. § INTRODUCTION In autonomous driving, multiple input sensors are often available, each of which has its coordinate frame, such as the coordinate image frame used by RGB cameras or the egocentric coordinate frame used by the Lidar scanner. Downstream tasks, such as motion planning, usually require inputs in a unified egocentric coordinate system, like the widely used Bird's Eye View (BEV) space. Thus, transforming features from multiple sensors into the BEV space has become a critical step for autonomous driving. Here, we focus on this transformation for the vision-only setup where we take as input multi-view RGB images captured in a single time stamp by cameras mounted on the ego vehicle and output estimation results, such as object detection and segmentation, in a unified BEV space, see Fig. <ref>. In general, accurate depth information is crucial to achieve effective transformations. 
Early methods <cit.> forgo explicit depth estimation and learn implicit feature transformations using neural networks, which suffer from a generalization problem since the network has no explicit prior on the underlying geometric relations. More recent methods <cit.> adopt explicit but simplified depth representations for the transformation, which either require large memory consumption, limiting the resolution <cit.>, or over-simplify the representation, leading to noise in the BEV space <cit.>. Moreover, these simplified depth representations cannot efficiently provide visibility information. As downstream tasks such as semantic segmentation are trained using aerial map ground truth, the lack of visibility estimation usually results in hallucination effects, where the network segments areas that are not visible to the sensor <cit.>, see Figure <ref>. As a consequence, such estimations can mislead downstream planning tasks: it is extremely dangerous to drive towards a hallucinated road region that is in fact non-driveable, especially at high speed. To address these limitations, we propose to adopt an explicit parametric depth representation and geometric derivations as guidance to build a novel feature transformation pipeline. We estimate a parametric depth distribution and use it to derive both a depth likelihood map and an occupancy distribution to guide the transformation of image features into the BEV space. Our approach consists of two sequential modules: a geometry-aware feature lifting module and an occupancy-aware feature aggregation module. Moreover, our parametric depth-based representation enables us to efficiently derive a visibility map in BEV space, which provides valuable information to decouple visible and occluded areas in the estimations and thus mitigate the hallucination problem. We also derive ground-truth visibility in BEV space, which enables us to design a novel evaluation metric for BEV segmentation that takes visibility into account and reveals insights into selected recent methods <cit.> in terms of estimation on visible regions and hallucination on occluded regions. Our contributions can be summarized as follows: * We propose a geometry-aware feature transformation based on parametric depth distribution modeling to map multi-view image features into the BEV space. Our depth distribution modeling enables the estimation of visibility maps to decouple visible and occluded areas for downstream tasks. * The proposed feature transformation framework consists of a novel feature lifting module that leverages the computed depth likelihood to lift 2D image features to the 3D space, and a feature aggregation module that projects features to the BEV frame through the derived 3D occupancy. * We further propose a novel visibility-aware evaluation metric for segmentation in BEV space that reveals insights into estimation on visible space and hallucination on occluded space. Extensive experiments on the nuScenes dataset on object detection and semantic segmentation demonstrate the effectiveness of our method, yielding state-of-the-art results for these two tasks with a negligible compute overhead. § RELATED WORK External depth based feature transformations. When depth input is available, either from a Lidar sensor or from stereo matching, image features can easily be transformed into BEV space <cit.>. PointPillars <cit.> extracts features from a 3D point cloud and aggregates them into BEV space.
PseudoLidar<cit.> based methods firstly estimate a depth using stereo matching given stereo image pair as input followed by unprojecting the feature based on estimated depth. However, in real-life applications, Lidar sensors or stereo image inputs are not always available, which limits these line of methods. Feature transformations without reliable depth input. Without reliable depth input, various feature transformation methods have been proposed<cit.>, starting from early methods<cit.> that learn implicit feature transformations using neural networks. Learned transformation can suffer from the generalization problem, since the neural network does not explicitly account for changes in cameras' intrinsic and extrinsic parameters. Recent methods <cit.> adopt various depth representations to explicitly transform features based on multi-view geometry to the BEV space. The key in these methods is the underlying depth representation, which dominates the resolution and accuracy the feature transformation module can achieve. For instance, LSS <cit.> adopts a non-parametric depth representation. It represents depth as a discretized probability density function along each visual ray, which can be treated as a categorical distribution of depth. It can further form the depth probability volume in LSS for all pixels in an image. When the sampling rate is sufficient, such non-parametric depth distribution can adequately represent a large variety of depths, including multi-modal depth distributions. In practice, however, to estimate such depth representation, the backbone needs to estimate a probability volume that is cubic with the input image size and increases significantly along the number of input images, which limits the image and depth resolution. To address this limitation, M^2BEV <cit.> adopts a simplified depth representation assuming the depth of all pixels follows a uniform distribution. Under this assumption, features are directly lifted to every location on the visual ray, resulting identical feature along the entire ray with no difference. Following works <cit.> followed similar depth representation. Such simplified representation have advantage on efficiency, as the backbone network do not need to estimate any parameter for the depth, but can cause ambiguity and noise in the 3D space. Unlike the non-parametric depth distribution used in <cit.> or the uniform depth distribution in M2BEV<cit.>, we adopt a parametric depth distribution to model pixel-wise depth for feature lifting. Parametric depth distribution represents depth as a continuous distribution such as Gaussian or the Laplacian distribution, and its estimated distribution parameters can be used to evaluate depth likelihood or depth probability on any given depth value along each ray. To model the depth for a pixel, it takes only two parameters (μ,σ) for Gaussian and two (μ,b) for Laplacian, so it can be more efficient than non-parametric distribution. Moreover, its continuous nature allows evaluating depth likelihood on any points along the visual ray, which can achieve a higher depth resolution than the diescretized non-parametric distribution. We specifically designed our pipeline incorporating parametric depth to improve 2D-BEV feature transformation and also propose the derivation of visibility for subsequent planning tasks and visibility-aware evaluations. Aggregating 3D feature into BEV space. 
Given the lifted feature in 3D space, most existing works including LSS <cit.> and M^2BEV <cit.> use the feature concatenation method introduced by Pointpillars<cit.> for transforming 3D features into BEV space. The 3D feature volume is split along horizontal dimensions and interpreted as pillars of features. Then, a feature vector is created by concatenating features along the vertical dimension for each pillar. All the concatenated features form a 2D feature map, which is converted into BEV feature map by few convolution layers. This design allows each voxel along the Z-axis to have equal contribution to the final BEV feature. However, this method can be affected by noisy features on empty spaces. We thus propose to compress the features based on a calculated space occupancy probability from the parametric depth distribution. Our proposed method can largely reduce the influence of those empty voxels to the aggregated features. Joint Detection and Segmentation in BEV space. M^2BEV recently proposed a unified detection and segmentation framework in BEV space, which we leverage to evaluate the effectiveness of our method. Specifically, the image features are transformed into a unified BEV feature, which is used by two parallel heads, a detection head and a segmentation head, to achieve multi-task estimation. M^2BEV leverage a detection head design from Lidar-based detection methods <cit.> and modify it to better suit camera-based methods. Their segmentation head is inspired by the design from <cit.>. However, in contrast to prior work, we leverage the proposed explicit feature transformations based on parametric depth to address its weaknesses. Temporal extension. Few concurrent methods <cit.> proposed to utilize temporal information to further boost segmentation and detection performance in BEV space and achieved promising results. Most of these methods, including BEVFormer<cit.>, BEVerse<cit.>, BEVDet4D<cit.> are based on the feature transformation module in LSS<cit.>. <cit.> adopt depth supervision and temporal stereo matching to improve depth quality and further propose a more efficient implementation of LSS's Lift-splat step. <cit.> query 2D features from projected location of 3D voxels, which does not explicitly use depth and is similar to the uniform depth assumption in M^2BEV<cit.>. Our contributions focusing on depth representation, feature transformation and visibility estimation is orthogonal to the temporal extension of these methods and our method can potentially be applied to these methods to further boost their performance and enable the efficient visibility inference. § METHOD Let us now introduce our framework to jointly perform segmentation and object detection. Shown in Fig. <ref>, our framework comprised of three fundamental components: feature extraction, feature transformation, and multi-task estimation. The framework's key contributions include a parametric depth decoder integrated into the feature extraction, a geometry-aware feature lifting module, and an occupancy-aware feature aggregation module. Furthermore, we introduce a visibility estimation module as a constituent of the multi-task estimation that provide crucial visibility information for down-streaming planning tasks. §.§ Problem Statement Let { I_i} _i=1^N,  I_i∈ℝ^H× W × 3, be a set of RGB images taken at the same time slot, H and W define the image dimension, and { K_i, R_i, T_i}_i=1^N represent the intrinsic and extrinsic parameters for their corresponding camera poses, respectively. 
We focus on lifting the image features f_i^2D∈ℝ^H× W × CH to the 3D space as f^3D∈ℝ^X'× Y' × Z'× CH and then aggregate them to the BEV space as f^BEV∈ℝ^X× Y × CH_B for 3D object detection and segmentation. §.§ Parametric Depth Distribution Modelling Let us first introduce our parametric depth distribution modelling. Given an image I_i, we extract its latent features f_i^T using a backbone network followed by a image feature decoder network to extract 2D image features, f_i^2D, see fig. <ref>. Then, following depth estimation methods <cit.>, we adopt a Laplacian distribution to model depth in real-world scenarios where the depth distribution for each pixel is given by, ℒ(d|μ,b) = 1/2bexp(-|d-μ|/b), where μ provides an estimation of the depth, and b is the diversity parameter of the distribution, see Fig. <ref>. The goal in this module is to estimate (μ, b). We design the parametric depth decoder network Φ_θ to map the latent feature to the parameter space of the depth distribution: Φ_θ: ℝ^H× W× CH_T→ℝ^H× W× 2, where CH_T is the latent feature dimension. Note that when the ground-truth depth for each pixel is known, the depth distribution becomes a delta function, where the depth probability p(d_gt) on ground-truth depth d_gt is one and zero anywhere else. However, in practice, the depth is unknown for each pixel. Given our modelled depth distribution, we can calculate the depth likelihood analytically based on our parametric modelling. Fig. <ref> shows an example of depth distribution where μ gives an estimate of the depth and b could be interpreted as the uncertainty of each estimation. Larger values of b correspond to areas where the estimation is more uncertain. §.§ Geometry-aware Feature Lifting Fig. <ref> depicts our geometry-aware feature lifting module to transform the 2D image features f_i^2D∈ℝ^H× W× CH from the camera coordinate system into 3D space defined for the ego vehicle coordinate system, generating the 3D feature volume f_i^3D∈ℝ^X'× Y'× Z'× CH_I. Ideally, the 2D image feature for each pixel is back-projected along the visual ray to the 3D location defined by its ground truth depth value f^3D( P_gt) = f^2D( p), where P_gt = d_gt K_i^-1p̃, p̃ is the homogeneous coordinate for p. Without knowing the true depth value for each pixel, we discretize the 3D space into voxels and thus aggregate the feature for each voxel by forward projecting it to multi-view images. Precisely, let P_j = (x_j, y_j, z_j)^T define the 3D coordinate of centre for voxel j. Given the camera poses for multiple views, we project it to image I_i as d^i_jp̃^i_j = K_i( R_iP̃_j+ T_i) where p̃^i_j denotes the homogenous coordinate of p^i_j in image I_i. Meanwhile, we can obtain the depth value of P_j in view i as d^i_j. Based on our parametric depth modelling, we obtain the likelihood of d^i_j being on the object surface as α_d^i_j = ℒ(d^i_j|μ^i_ p^i_j,b^i_ p^i_j) = 1/2b^i_ p^i_jexp(-|d^i_j-μ^i_ p^i_j|/b^i_ p^i_j). We similarly project the voxel to all views and aggregate the feature for the j-th voxel as f_j^3D = ∑_i=1^Nα_d^i_j f_i^2D( p^i_j), where f_i^2D is the extracted image feature. We adopts bilinear interpolation to obtain f_i^2D( p^i_j) when p^i_j is a non-grid coordinate. All lifted 3D features form the 3D feature volume f^3D∈ℝ^X'× Y'× Z'× CH, which is then aggregated by our occupancy aware feature aggregation module into 2D BEV feature, introduced in the following section. 
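To make the lifting step concrete, the following is a minimal PyTorch-style sketch of the procedure described above: voxel centres are projected into every view, the Laplacian likelihood of the projected depth is evaluated from the estimated (μ, b), and image features are accumulated as a likelihood-weighted sum over views. The tensor shapes, function names, the 0.1 m depth cut-off, and the handling of out-of-view voxels are our own simplifying assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F


def laplacian_likelihood(d, mu, b):
    # L(d | mu, b) = exp(-|d - mu| / b) / (2 b)
    return torch.exp(-torch.abs(d - mu) / b) / (2.0 * b)


def lift_features(feats_2d, mu, b, K, R, T, voxel_centers):
    """Likelihood-weighted lifting of multi-view 2D features onto 3D voxels.

    feats_2d: (N, C, H, W) image features; mu, b: (N, 1, H, W) Laplacian depth
    parameters; K, R, T: per-view intrinsics and extrinsics; voxel_centers: (V, 3)
    voxel centres in the ego frame.  Returns a (V, C) tensor of lifted features.
    """
    N, C, H, W = feats_2d.shape
    V = voxel_centers.shape[0]
    lifted = feats_2d.new_zeros(V, C)
    for i in range(N):
        cam = voxel_centers @ R[i].T + T[i]             # voxel centres in camera i
        proj = cam @ K[i].T                             # d * (u, v, 1)
        depth = proj[:, 2]                              # d_j^i along the visual ray
        uv = proj[:, :2] / depth.clamp(min=1e-3).unsqueeze(-1)
        # normalise pixel coordinates to [-1, 1] for bilinear sampling
        grid = torch.stack([uv[:, 0] / (W - 1) * 2 - 1,
                            uv[:, 1] / (H - 1) * 2 - 1], dim=-1).view(1, 1, V, 2)
        feat = F.grid_sample(feats_2d[i:i + 1], grid, align_corners=True)
        feat = feat.squeeze(0).squeeze(1).T             # (V, C)
        mu_s = F.grid_sample(mu[i:i + 1], grid, align_corners=True).view(V)
        b_s = F.grid_sample(b[i:i + 1], grid, align_corners=True).view(V).clamp(min=1e-3)
        alpha = laplacian_likelihood(depth, mu_s, b_s)  # depth likelihood alpha_{d_j^i}
        alpha = alpha * (depth > 0.1).float()           # ignore voxels behind the camera
        lifted += alpha.unsqueeze(-1) * feat            # weighted sum over views
    return lifted
```

In the full pipeline the loop over views would be batched and the output reshaped into the (X', Y', Z', CH) volume, but the likelihood weighting is the part that matters here.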
§.§ Occupancy-aware Feature Aggregation Our occupancy-aware feature aggregation module aggregates the 3D feature volume f^3D∈ℝ^X'× Y'× Z'× CH from ego vehicle 3D coordinate frame into BEV space, forming BEV feature map f^BEV∈ℝ^X× Y× CH_B. As shown in Fig. <ref>, the 2D BEV coordinate system is aligned with the XY plane of the ego vehicle coordinate system where the shared origin is defined on the center of the ego vehicle. Note that the BEV coordinate system only has 2 dimensions, forgoing the Z dimension. The goal of the feature aggregation is to transform the 3D feature volume in ego vehicle coordinate into a 2D feature map in the BEV space, which can be treated as aggregating the 3D feature volume along its Z axis. To this end, we first rearrange the previously computed depth likelihood for all voxels by Eq. <ref> into a depth likelihood volume P^3D∈ℝ^X'× Y'× Z', which shares the same volumetric coordinate as that of 3D feature volume f^3D. For each column along the Z-axis in the depth likelihood volume, the likelihood of each voxel of different height reflects its spatial occupancy. Thus, we normalize the depth likelihood along Z axis into a spatial occupancy distribution, forming a spatial occupancy volume O^3D∈ℝ^X'× Y'× Z' defined as O^3D(x,y,z) = P^3D(x,y,z) + b_o/∑_z_i=0^Z'-1P^3D(x,y,z_i) + b_o, where the b_o is a bias term to encourage an equal contribution of feature on completely occluded region. Our feature aggregation along the Z-axis could minimize the influence of features from empty voxels to the final feature in the BEV frame. Given the spatial occupancy volume O^3D, we compute the final 2D BEV feature as a weighted sum of 3D features f̂^BEV(x,y) = ∑_z_i=0^Z'-1 (O^3D(x,y,z_i)× f^3D(x,y,z_i)), where we use the normalized spatial occupancy distribution as the 3D feature weight. We further transform f̂^BEV via a few layers of convolution to obtain the final feature for BEV space f^BEV which is then applied to detection and segmentation tasks. §.§ Object Detection and Segmentation Given the BEV feature map, we use two heads for detection and segmentation. Specifically, we adopt the detection head and segmentation head from M^2BEV <cit.> without modification for fair comparison. The detection head consists of three convolution layers and outputs dense 3D anchors in BEV space along with category, box size, and direction of each object. The segmentation head consists of five convolution layers and outputs 2 classes predictions, road and lane, as originally defined by LSS<cit.>. §.§ Training Strategy We adopt supervised training strategy. We supervise the parametric depth estimation by maximizing its depth likelihood on ground-truth depth observations. Specifically, we minimize the negative log-likelihood loss ℒ_D using sparse ground-truth depth d_gt generated from sparse lidar measurements. Here ℒ represent Laplacian distribution and P^i_gt represent set of pixels where ground-truth lidar measurements is valid for image i. ℒ_D(θ) =∑_i=1^N∑_p∈𝒫^i-log(ℒ(d^p_gt,i|μ_i^p(θ), b_i^p(θ))) where 𝒫^i defines the set of pixel coordinates with valid ground truth depth map for view i. For detection head, we use the 3D detection loss used in PointPillars<cit.> as follows, where ℒ_loc is the total localization loss, ℒ_cls is the object classification loss, ℒ_dir is the direction classification loss, N_pos refer to the number of positive samples and β_cls, β_loc, β_dir are set to 1.0, 0.8, 0.8 accordingly. 
ℒ_det = 1/N_pos(β_clsℒ_cls + β_locℒ_loc + β_dirℒ_dir) Please refer to <cit.> for more details. For segmentation head, we use both Dice loss ℒ_dice and binary cross entropy loss ℒ_bce as segmentation loss ℒ_seg and use equal weight β_dice = β_bce = 1. ℒ_seg = β_diceℒ_dice + β_bceℒ_bce For the visibility map and additional outputs, since they are geometrically derived from the estimated parametric depth representation without any learned parameters, it's not necessary to apply supervision on them. § VISIBILITY §.§ Visibility Map The segmentation in BEV space mainly focuses on segmenting lane regions. However, those regions are not always visible in the camera views due to the occlusion of vertical scene structures such as building (see Fig.<ref>). We thus propose to use our parametric depth modeling to infer a visibility map which decouples visible and occluded areas and, will contribute to mitigate the hallucination effect. We define a visibility map V^BEV∈ℝ^X× Y to describe the visibility range of ego vehicle's multi-view cameras. Starting from the likelihood of the Laplacian distribution in Eq. <ref>, the occlusion probability B(d) of a voxel in 3D space that has a back-projected depth d in camera view is B(d) = ∫_0^dℒ(x|μ,b) dx. We derive this occlusion probability as follows. Firstly we find the indefinite integral of Eq. <ref> as F(x) = ∫_-∞^xℒ(x|μ,b)dx = 1/2exp(x-μ/b)  if  x < μ 1-1/2exp(-x-μ/b)  if  x ≥μ. Then we calculate the definite integral between [0,d] as the occlusion probability B(d), which is defined as B(d) = F(d) - F(0) = F(d)-1/2exp(-μ/b). In practice, this is computed very efficiently, without the need to perform the discrete integration of the depth likelihood over the range [0,d]. Based on the relationship between visibility and occlusion, we convert the occlusion probability B to visibility probability V by V(d) = 1-B(d) = 1 + 1/2exp(-μ/b)-F(d). To finally compute the visibility in BEV space, we take the maximum visibility probability along the Z axis to form the visibility map V^BEV. Ṽ^BEV(x,y) = max_z∈𝒵'V(x,y,z) where 𝒵'={0,1,2⋯ Z'-1}. The V^BEV is obtained via interpolation from Ṽ^BEV. §.§ Visibility-aware Evaluation For semantic segmentation where the ground-truth is usually generated using aerial images, it is not possible evaluate predictions in visible and occluded areas by using the standard evaluation metrics. Therefore, in this section, we follow a similar process as the one to generate the visibility map to derive a visibility-aware evaluation method for segmentation in BEV space. In this case, however, we project the lidar 3D points (ground-truth) into multi-view image space and use a depth completion network to obtain multi-view dense depth maps. This depth map is then used as the expected depth value to build a parametric depth representation F(θ_gt). We then evaluate the ground-truth depth likelihood on each voxel in 3D space using Eq. <ref>, forming the ground-truth depth likelihood volume L_gt. Finally, we derive the ground-truth visibility map in BEV space V using Eq. <ref> and Eq. <ref>. In this case, V reflects the maximum visibility of the multi-view cameras in BEV space. Thus, it can be used as a mask to explicitly evaluate results in BEV space subject to visibility. Specifically, we use a threshold τ_vis to split the predicted segmentation s_pred and ground-truth segmentation label s_gt into visible region {s^vis_pred,s^vis_gt} and occluded region {s^occ_pred,s^occ_gt}. 
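Because both the visibility map and the visible/occluded split are obtained in closed form from the Laplacian parameters, they can be written in a few lines. The sketch below is illustrative only: the array layout, the single threshold value of 0.5, and the assumption that each voxel already stores the back-projected depth and the (μ, b) of the pixel it projects to are our own simplifications rather than the authors' implementation.

```python
import numpy as np


def laplace_cdf(x, mu, b):
    # F(x) = 1/2 exp((x - mu)/b) for x < mu, and 1 - 1/2 exp(-(x - mu)/b) otherwise
    return np.where(x < mu,
                    0.5 * np.exp((x - mu) / b),
                    1.0 - 0.5 * np.exp(-(x - mu) / b))


def visibility(d, mu, b):
    # V(d) = 1 - B(d), with B(d) = F(d) - F(0), i.e. V(d) = 1 + 1/2 exp(-mu/b) - F(d)
    return 1.0 + 0.5 * np.exp(-mu / b) - laplace_cdf(d, mu, b)


def bev_visibility_map(ray_depth, mu_vol, b_vol):
    """(X, Y, Z) arrays of back-projected depth and per-voxel (mu, b) -> (X, Y) map,
    taking the maximum visibility probability along the Z axis."""
    return visibility(ray_depth, mu_vol, b_vol).max(axis=2)


def split_by_visibility(seg, vis_map, tau=0.5):
    """Split a BEV mask into its visible and occluded parts with a single threshold."""
    visible = seg * (vis_map >= tau)
    occluded = seg * (vis_map < tau)
    return visible, occluded
```

The visibility-aware scores are then obtained by evaluating the standard IoU on each part separately, as defined next.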
We can then compute the IoU for the visible (IoU_vis) and occluded (IoU_occ) regions separately as s^vis = ∑_x∈𝒳,y∈𝒴s(x,y)× 1(V(x,y) ≥τ _vis),  s^occ = ∑_x ∈𝒳, y∈𝒴s(x,y)×1(V(x,y) < τ _occ), IoU_vis = s^vis_pred∩ s^vis_gt/s^vis_pred∪ s^vis_gt, IoU_occ = s^occ_pred∩ s^occ_gt/s^occ_pred∪ s^occ_gt where 𝒳={0,1,⋯,X-1}, 𝒴={0,1,⋯,Y-1}, and 1(·) is the indicator function. We also report the occlusion rate on nuScenes as the percentage of visible or occluded segmentation labels over total number of segmentation labels. § EXPERIMENTS In this section, we first detail our experimental settings, then we demonstrate the effectiveness of our approach on the nuScenes dataset, and, finally, we provide ablation studies on the main components of our method. §.§ Implementation Details Dataset. We conduct our experiments on the nuScenes dataset <cit.>. The nuScenes dataset provides video sequences along with multiple sensor outputs including Lidar, Radar, GPS and IMU, all of which are collected by calibrated and synchronized sensors mounted on an vehicle driving across Boston and Singapore. The dataset consists of 1000 sequences, split into 700 for training and 150 for validation and testing, respectively. Each sample provides six RGB images captured by 6 cameras with divergent viewing directions along with Lidar sparse 3D points, Radar sparse 3D points, GPS pose and IMU readouts. We follow <cit.> to generate ground-truth segmentation labels from the global map provided by nuScenes dataset. Evaluation metrics. We report our results using the same metrics as in the nuScenes benchmark. For detection, we report mean Average Precision (mAP) and the nuScenes detection score <cit.>. For segmentation, we follow LSS <cit.>, and report the mean IoU score (mIoU). In addition, we report results using the proposed visibility-aware evaluation detailed in Sec. <ref>. Unless specified, we report numbers on the validation set. Network architecture. We use a unified framework to demonstrate benefits of our depth-based feature transformation module. The network consists of a backbone image encoder and two decoding heads, one for segmentation and one for detection. We use ResNet with deformable convolution as the image encoder. For the decoding heads, we use the same architecture as the one in PointPillars <cit.>. We set the size of the intermediate 3D volume consisting of X'× Y'× Z' = 400×400×12 voxels, with a voxel size of 0.25m× 0.25m× 0.5m, respectively. The final BEV space dimension consists of X× Y = 200×200 grids. Each grid is of size 0.5m× 0.5m. Training and inference. During training, we use 6 RGB images and corresponding camera parameters as input. The training for parametric depth estimation is supervised by the ground-truth sparse Lidar points provided in the dataset. Ground-truth detection and segmentation labels are used to supervise the detection and segmentation heads. We set batch size to 1 per GPU and use 3 nodes with 8 Nvidia V100 GPUs. For inference, our method only requires the 6 input RGB images together with the corresponding camera parameters. §.§ Results We now compare our results with M^2BEV and other state-of-art methods on the nuScenes dataset. To facilitate the comparison to other approaches, we use ResNeXt-101 as the backbone of our method for detection and segmentation experiments and use ResNet-50 as the backbone for multi-task learning experiments and efficiency analysis. Detection. We report the results of our method and related state of the art methods in Tab. <ref> and Tab. 
<ref>, for the validation set and the test set respectively. For the validation set, we only include frame-wise camera-based methods. That is, we exclude those approaches using temporal information. For the test set, we include the latest results including Camera, Lidar, Radar and their combination. As we can see, in both sets, our approach outperforms all existing camera-based methods on both mAP and the NDS score. Segmentation. We now focus on evaluating our semantic segmentation results. We report our performance compared to state-of-the-art methods on the nuScenes validation set in Tab. <ref>. We also report a variant of our model trained without depth supervision (Ours*) to fairly compare with LSS <cit.>. Our method performs significantly better compared to LSS <cit.> on both road and lane segmentation and slightly better compared to M^2BEV <cit.>, the closest method to ours. Our model without depth supervision still outperforms existing methods. Interestingly, if we take the visibility into account, as shown in Tab. <ref> and Fig. <ref>, our method clearly outperforms the baselines on the visible areas while maintain the performance compared to M^2BEV on the occluded regions. These results evidence the benefits of our parametric depth approach. Joint detection and segmentation. Finally, we report results for jointly evaluating both tasks. In this case, we compare our results to the multi-task version of M^2BEV. We show results for this experiment in Tab. <ref>. Our method, once again, outperforms the baseline on both detection and segmentation tasks. These results further evidence the benefits of an improved depth representation in the 2D to 3D feature transformation process. Efficiency. Our parametric depth estimation requires the estimation of additional parameters compared to simplified depth estimation approaches. As shown in Tab. <ref>, our model requires slightly larger amount of memory; However, that does not lead to a significant increase in the inference time. §.§ Ablation Studies We carry out ablation experiments to study the influence of feature transformations on final detection and segmentation performance and the robustness of our model to calibration error. More ablation experiments can be found in supplementary material. We use ResNet-50 as the backbone for all ablation experiments. Feature transformations We evaluate the effectiveness of the parametric depth based feature lifting and aggregation module comparing with baseline non-parametric depth based lifting LSS<cit.>, baseline uniform depth based lifting similar to M^2BEV and the widely used Pointpillar<cit.> feature aggregation. Results are in Tab. <ref>. Our proposed parametric depth based lifting coupled with occupancy based feature aggregation achieved best performance for both detection and segmentation. Limitations. Like all camera based methods, our method can only provide reliable detection and segmentation results on visible region. On occluded region, although our method can provide hallucination results and visibility information, the results are not reliable for making critical driving decision. Following planning tasks should utilize the visibility and uncertainty information to achieve reliable planning. § CONCLUSION We propose a parametric depth distribution modeling-based feature transformation that efficiently transforms 2D image features to BEV space. By incorporating visibility inference, our method can provide crucial visibility information to down-streaming planning tasks. 
Moreover, our approach outperforms existing methods in both detection and segmentation tasks, making it a promising candidate for feature transformation in future work. We plan to investigate the integration of temporal information to further improve estimation accuracy.
http://arxiv.org/abs/2307.04710v1
20230710171958
Remarks on the Axion Domain Wall Problem
[ "Michael Dine" ]
hep-ph
[ "hep-ph", "hep-th" ]
Remarks on the Axion Domain Wall Problem Michael Dine ====================================================================================== § TO DO Consider domain walls bounded by strings, along the lines of the old paper by Sikivie et al. Is there any change in the picture, e.g. due to attractive and repulsive forces between string elements? Might imagine always have order one domain per horizon before considering bias. Suppose, e.g., N. In light travel time to cross the horizon, system develops a large γ. Ask what fraction of domain wall energy might be in hadrons and interpret in terms of final collapse. Think about final collapse in terms of particle collisions. If principally axions, what are the collision products? § INTRODUCTION A Peccei-Quinn symmetry<cit.> has the potential to solve the strong CP problem and account for the dark matter of the universe <cit.>. Before considering cosmology, the axion decay constant, a priori, can take a broad range of values. Stellar astrophysics places a lower bound in the range of 10^9-10^10 GeV<cit.>. Big bang cosmology, with the assumption that the universe, in the past, was hotter than a GeV or so, places an upper limit of about 10^12 GeV. Attaining a symmetry of sufficient quality<cit.> to solve the strong CP problem, however, is quite a challenge. String theory, and more general considerations of quantum gravity, rule out exact, continuous global symmetries. So one expects that in the effective field theory at low energies, at the very least there will be Planck-suppressed operators which violate the symmetry. Even for the low range of f_a, operators of very high dimension must be suppressed to account for the smallness of θ<cit.>. One might try to account for this suppression by discrete symmetries<cit.>, but the symmetries must be quite large. String theory appears capable of avoiding this problem<cit.>, in the sense that PQ symmetries may be violated only by non-perturbative effects, which can be extremely small if the theory generates a small coupling constant. But in this case, a value of f_a much smaller than, say, typical scales associated with coupling constant unification, would be surprising. Larger scales are admissible if the universe was never much hotter than nucleosynthesis temperatures in the past. This might occur if there was a period where the universe was dominated by moduli; see, for example, <cit.>. These observations arguably cast doubt on the PQ solution, and in any case, would seem to favor large values of f_a and a modified cosmology. In this paper, however, we will adopt the conventional picture that the universe was quite hot in the past and we will assume that there was a PQ transition after inflation, and focus on the problem of domain walls. In the scenario in which there is a PQ transition after inflation, domain walls are potentially problematic <cit.>. If the PQ symmetry is an exact, continuous global symmetry (up to anomalies), the theory has stable domain walls provided the coefficient of the QCD anomaly is (suitably normalized) an integer different from one. The domain wall energy density falls off as 1/R, where R is the scale factor, as opposed to the radiation energy density (1/R^4) or the matter dominated energy density 1/R^3. Typically, the domain walls dominate well before the present era, spoiling the successes of the Standard Cosmology.
Two plausible solutions to this problem were put forward in<cit.>: * The coefficient of the anomaly is unity. * There is a small explicit breaking of the PQ symmetry, large enough to lead to collapse of the domain wall system. Even for the first solution, one has to consider the effects of cosmic strings<cit.>. The second solution has troubling features. The strength of the leading symmetry breaking operator is restricted to a narrow range. First, it must be small enough that the resulting θ satisfies the current experimental bounds. Typically this requires that the leading operator which breaks the continuous PQ symmetry is of very high dimension<cit.>. Second, the operator must be large enough that, if there are domain walls, these disappear before they come to dominate the energy density of the universe. Naively, for interesting values of f_a, the axion decay constant, these two conditions limit the symmetry breaking operator to a narrow range (we will take f_a = 10^12 GeV as our benchmark). If the suppression of high dimension operators is a consequence of discrete symmetries, these symmetries must be very large, but not too large<cit.>. Recently, the authors of <cit.> have revisited the domain wall problem. They put forward and then rule out a third possible solution: a bias in the domain wall ensemble favoring one of the ground states.They then argue that the second solution above is unlikely to work, except for relatively small values of the axion decay constant, small enough as to be problematic for stellar processes. They argue, in particular, that the collapsing domain wall system produces too much dark matter. Motivated by the appearance of high quality PQ symmetries in string theory, the present author has long been an advocate for high scale breaking of the PQ symmetry, which requires a breakdown of the Standard Big Bang Cosmology at temperatures not much higher than nucleosynthesis temperature. But in this note, we consider the possibility of a post-inflationary PQ transition and smaller axion decay constants, with a conventional thermal history for the university at least up to temperatures of a few GeV. We will focus on the issues associated with the second solution, small breaking of the Peccei-Quinn symmetry. We will consider the dark matter issue, demonstrating that, until the domains collapse, most of the excess energy is converted into kinetic energy of the domain walls. Once the domains shrink to sizes of order m_a^-1, this energy is converted to ultrarelativistic axions and hadrons. Provided that the the domain walls never dominate, these objects are relatively harmless. We will review the problem of accounting for small symmetry breaking, focussing on models where the PQ symmetry is an accident of a large discrete symmetry<cit.>. Such symmetries, at least at first sight, are not particularly plausible. The requirements, indeed, yield extremely large symmetries, yet the symmetry also cannot be too large if the breaking is to be sufficient to avoid a domain wall dominated universe. Given the seeming absurdity of the requirements on the symmetry breaking, we ask whether there might be some anthropic explanation. Taking a very conservative approach to the anthropic principle, where one asks whether the change of one particular parameter can rule out the existence of observers, anthropic constraints have the potential to restrict the strength of the symmetry breaking to the required range. This note is organized as follows. 
In the next section, we review some aspects of domain walls and the cosmic strings which bound them<cit.>. In particular, we discuss the sense in which one can systematically construct the domain wall from the chiral lagrangian, and also provide a simple, analytic domain wall solution in a particular limit. In section <ref>, we study the system in the presence of explicit breaking. We discuss constraints on the size of the breaking and the axion decay constant. We focus, particularly, on models where the PQ symmetry arises from a large discrete symmetry, noting that the symmetry must be quite large to accommodate the current limits on θ but (anticipating our cosmology discussion) can't be appreciably larger than this if it is to avoid domain wall catastrophes. In section <ref> we turn to cosmology. After reviewing aspects of the cosmological domain wall problem, we demonstrate that the wall collisions principally produce gravitational waves and note that their energy density can readily be in a suitable range. We then turn to the coincidence problem in section <ref>, arguing that it places requirements on a theory which call out, if a PQ symmetry is realized in nature in this fashion, for an anthropic solution. As noted above, we will see that such a solution is plausible. It is hard to see how otherwise there would be any solution at all. Our conclusions are presented in section <ref>. § DOMAIN WALL GENERALITIES Suppose we have an exact Peccei-Quinn symmetry up to the anomaly. Under the PQ symmetry, the various fields transform by phases e^i q_pqα. By convention, we take the PQ charges, q_pq to be integers. θ changes, in general, by 2 π N for some integer N under this transformation. Given 2π periodicity of θ, the symmetry is in fact Z_ N. For N 1, there are domain walls. The tension of the domain walls is of order T ∼ m_a f_a^2 ∼ m_π f_π f_a. We will have in mind f_a ∼ 10^11- 10^12  GeV in what follows. It is of interest to ask whether, within the framework of chiral perturbation theory and/or large N, we can write a strict equality for the domain wall tension. §.§ Domain Wall Solutions from the Chiral Lagrangian Just as one can compute the axion mass using the chiral lagrangian, one can obtain the domain wall solutions in the case that the system supports axionic domain walls. Suppose that the light quarks have PQ charges q_i (we will mainly write explicit formulas for the case of two light quarks). Suppose, as well, that the PQ symmetry has anomaly ∂_μ j^μ_PQ = N 32 π^2 F F̃. Then we can define an anomaly free current, j̃^mu_PQ by subtracting off a non-conserved current with the same anomaly: j^5 μ = N( u̅γ^μγ^5 u + d̅γ^μγ^5 d) So now: ∂_μj̃_PQ^μ = N( m_u u̅γ^5 u+ m_d d̅γ^5 d). In computing the axion mass, one sometimes makes a different choice<cit.>, so that the divergence of the current does not have matrix elements between vacuum and the single pion state, but this is not convenient for the domain wall problem, where one needs to consider a finite axion field range. We can explore the effect of finite transformations generated by Q̃_PQ, U(α) = e^i αQ̃_PQ. Under this transformation the quark mass terms are not invariant; these transform as: m_u u̅ u + m_d d̅ d → ( m_u u̅ u + m_d d̅ d) cos ( Nα) + (m_u u̅γ_5 u + m_d d̅γ_5 d)sin( Nα) . Now consider the chiral lagrangian. Our goal is to integrate out the pion fields. 
We do this by replacing u̅ u and d̅ d by their expectation values as functions of the pseudogoldstone fields, and solving for the minimum of the π⃗ potential as a function of a = α f_a. Switching to a two component notation, and letting f,g denote flavor indices: ψ̅(x)_f ψ(x)_g = ⟨ψ̅(0) ψ(0) ⟩ ( e^i π⃗·σ 2 f_π )_fg We only have to solve for π_0. A particularly simple case is that of m_u = m_d. Then the potential is independent of π_0: V(a) = -m_π^2 f_π^2 cos( Na f_a ) = -f_a^2 m_a^2 N^-2cos( Na f_a ) . The presence of domains for N± 1 is manifest; the potential has N degenerate minimima with a f_a = 2 π k N; k=0,…, N-1, and the existence of domain wall solutions follows. The domain wall solution can be written down explicitly; it is the static soliton of the Sine-Gordan theory: a(x) = f_a (4arctan (e^m_a x) N + 2 π k N ). The tension of the domain wall satisfies: T ∝ f_a^2 m_a ∝f_a m_π f_π. It m_u m_d, there is an additional contribution from the pion fields proportional to δ T ∝m_u -m_d m_u + m_df_a m_π f_π. So, in general, the pions make an order one contribution to the tension. In the final collapse of the domains, this will be associated with production of energetic hadrons. § DOMAIN WALL COSMOLOGY Domain walls, if they come to dominate the energy density of the universe, are problematic<cit.>. The domain wall energy density decreases as 1/R, so it can quickly overwhelm the density of radiation or matter, falling as 1/R^4 or 1/R^3, respectively. So it is necessary that there either never were domain walls at all, or that they disappear relatively quickly, typically by times of order a few seconds after the big bang. Reference <cit.> considers, in addition to the two proposed solutions we mentioned earlier, a third possibility, that of a biased domain wall distribution, but rules it out. They then argue that the constraints associated with PQ violating operators have been underestimated. We will address their critique shortly. Reference <cit.> analyzed explicit PQ symmetry violation, tilting the axion potential and causing all but one type of domain to collapse. Suppose the splitting between states is: Δ V = ϵ 10^-10 m_π^2 f_π^2. This corresponds to a potential for the axion roughly of the form: Δ V(a) = ϵ  10^-10 m_a^2 f_a  a. ϵ cannot be extremely small if the domain wall system is not to dominate the energy of the universe before it disappears. When the temperature is of order f_π, the domain walls form. The corresponding Hubble parameter is H_0 = m_π f_π M_p. Calling the corresponding scale factor R_0, the domain wall density is subsequently of order: ρ_DW = (f_a m_π f_π) m_π f_π M_pR_0 R. So domain walls dominate when (R_0 R )^3 ≈f_a M_p or (R_0 R ) ≈ 10^-2 ( f_a 10^12 )^1/3. This corresponds to a temperature of order 1 MeV, or times of order 10 seconds. How quickly the domains collapse is the subject of the next section. §.§ Fate of the Domain Walls As noted in the literature, when the walls collapse, their energy can be converted to kinetic energy of the domain walls, to axions, gravitational waves, electromagnetic radiation, and possibly other types of matter or radiation. We will shortly argue that, before collapse, the energy is principally converted to kinetic energy of the walls, followed by highly relativistic axions. At the final collapse, this energy is converted to extremely relativstic axions. Gravitational and electormagnetic radiation are minor components of the energy budget. These axions would still be highly relativistic today. 
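Before examining the collapse dynamics in detail, it may help to restate the domination estimate above numerically. The short sketch below uses our own choices of constants (reduced Planck mass, rate-of-thumb time–temperature relation) and drops all factors of order one, exactly as in the text; it reproduces the quoted scale-factor ratio, temperature and time for f_a = 10^12 GeV.

```python
import numpy as np

# All energies in GeV; O(1) factors are ignored throughout, as in the text.
m_pi, f_pi = 0.135, 0.093        # pion mass and decay constant
M_p = 2.4e18                     # (reduced) Planck mass -- our choice of convention
f_a = 1.0e12                     # benchmark axion decay constant

tension = f_a * m_pi * f_pi      # wall tension  T ~ f_a m_pi f_pi
H0 = m_pi * f_pi / M_p           # Hubble rate at wall formation (T ~ f_pi)
rho_rad0 = (m_pi * f_pi) ** 2    # radiation density at formation, up to O(1) factors

# rho_DW ~ tension * H0 * (R0/R) catches up with rho_rad ~ rho_rad0 * (R0/R)^4
ratio = (tension * H0 / rho_rad0) ** (1.0 / 3.0)   # R0/R at equality = (f_a / M_p)^(1/3)
T_dom = f_pi * ratio                               # temperature then, since T scales as R0/R
t_dom = (M_p / T_dom ** 2) * 6.58e-25              # t ~ M_p / T^2, GeV^-1 converted to seconds

print(f"R0/R ~ {ratio:.1e}, T ~ {1e3 * T_dom:.1f} MeV, t ~ {t_dom:.0f} s")
# roughly 7e-3, sub-MeV temperature and a few seconds: consistent, at the
# order-of-magnitude level, with the estimates quoted above.
```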
When formed, the bubbles have radius, r_0, of order: r_0 ≈M_p m_π f_π. Initially their acceleration due to Δ V is slightly less than that due to the Hubble expansion, H: a ∼Δ V f_a f_π^2≈ϵ 10^-22 f_π^2;  H ∼ 10^-19 f_π^2. They become comparable when the temperature decreases by a factor of order 10^2. beyond this point we can, to first approximation, neglect the expansion. The velocity becomes of order one in less than a Hubble time, and collapse occurs in such a time. We can then ask: if we ignore gravitational radiation, what is the velocity of the domain wall once the domain shrinks to a microscopic size (more precisely, how large is the Lorentz γ factor for the wall. We take as the initial time the time when the cosmic acceleration is equal to the acceleration of the wall: H_0 = Δ V f_π^2 f_a≈ϵ× 10^-24  GeV. The energy stored in a horizon sized region of one of the excited states is: E ≈Δ V H_0^-3∼ϵ 10^58  GeV. If there is no emission of axions, gravitational or other radiation during the collapse, then once the region size is of order m_a^-1, the γ factor is enormous. The effective mass of the system is of order m_eff= f_a f_π^2 m_a^-2∼ 10^38  GeV, so γ∼ϵ 10^20. We now argue that in fact most of the energy gained is transferred to kinetic energy of the domain wall. Initially the domain is large and the curvature of the domain wall is negligible on macroscopic scales. To consider axion radiation, it is helpful to work in the instantaneous rest frame of the domain wall (more precisely, of a macroscopic segment of the wall). We can define what we mean as axion radiation by considering a set of domain walls, instantaneously at rest, described by a classical field configuration, ϕ(x⃗) = ϕ_cl(x⃗-x⃗_0). Were it not for the symmetry-breaking potential, Δ V, these configurations would be solutions of the equations of motion if x⃗_0 = v⃗ t. But due to the potential, they are not. Axion radiation corresponds roughly to the difference, δϕ, of the actual axion field and the would-be domain wall configuration at a given time. This is proportional to δϕ(x⃗,t) = Cẍ_0(t)_i ·∂_i ϕ_cl (x⃗,t). It would be challenging to compute the energy carried by δϕ, but it's form, for non-relativistic motion, can be determined by simple considerations. The energy should be rotationally invariant, translationally invariant, and time reversal invariant (up to very small effects within the Standard Model coupled to an axion). So the energy per unit time transferred to δϕ behaves as: E = B f_a^2 |ẍ_0^2 | A where A is the area of the domain wall, and B is an order one constant. We can write ẍ_0 in terms of Δ V and the tension of the domain wall, ẍ_0 = Δ V f_π m_π f_a. Correspondingly, in a frame boosted with Lorentz factor γ, the energy per unit time is increased by a factor of γ for the energy, but decreased by a factor of order γ from the time dilation, so the transformation between the two frames behaves as (γ)^0. A little more precisely, we might consider radiation in a time interval Δ t in the instantaneous rest frame of the wall. Δ t should be such that the wall is non-relativistic in that frame. For the element of area A, there will be movement of order Δ t × v in this time period. The corresponding time elapsed in our observers frame is: Δ t^' = γ (Δ t +v Δ z) but the second term is, by assumption, much smaller than the first and can be neglected. We want to compare the energy radiated by the wall per unit time with the energy the wall acquires per unit time from Δ V. 
This is Δ V A, so the condition that radiation is comparable to the energy increase of the wall per unit time due to Δ V is Δ V f_π^2 m_π^2 >1 which is never satisfied. In other words, the energy radiated in axions as the walls collapse is negligible. Similar considerations lead to suppression of electromagnetic and gravitational radiation. So the walls are extremely relativistic when they finally shrink to microscopic size. At this point, one expects that the collapse results in production of extremely relativistic axions, with γ factors comparable to those we discussed above, and very relativistic hadrons. The hadrons quickly thermalize with the background hadrons. The number of axions produced would be small, consistent with the fact that the final domains are microscopic in size. As we explain below, very little of the axion energy would be degraded in collisions with hadrons. In any case, provided that the domain walls don't dominate the energy density of the universe at the time of their collapse, their cosmological effects would be minor. §.§ The Final Stage of Domain Wall Collapse Because of the enormous γ factors of the axion produced in the decay of a domain, most of these axions stream through the universe. Only rarely does one interact with quarks, gluons. or other axions. For s-channel processes, the mean time between collisions is long. Even assuming an order one coupling of these high energy axions to nucleons, δ L = a N̅γ_5 N, the mean free time for axion collisions with nucleons is τ∼ 10^10γ m_a T^-3m_N ∼ 10^33 (γ 10^20 ) ( MeV T )^3 (m_a 10^-14 ) in GeV units, and we have taken n_B n_γ≈ 10^-10. This is an enormous time, comparable to even the current age of the universe. As we have noted, the γ factor is actually likely to be serval orders of magnitude larger than 10^25, giving further suppression. For t-channel exchange, due to the enormous γ factor, scattering is appreciable only at extremely small angles. As a result, the mean scattering time is, again, extremely large. So most of these axions are still around today, and are highly energetic. Their numbers, however, are not large, and their contribution to the energy budget of the present universe is smaller than the photon contribution (assuming that the domain wall contribution was always a small fraction of the total energy density at the time of collapse). Interactions with ordinary matter are very rare. § ANTHROPIC CONSIDERATIONS As we have stressed, there is one troubling feature of the Peccei-Quinn symmetry for these relatively low f_a axions in the suppression required for a Peccei-Quinn symmetry of sufficient quality to solve the strong CP problem, and another in the requirement of sufficient tilt solution to solve the domain wall problem. One might, indeed, argue that the requirements to obtain a PQ symmetry of sufficient quality to solve the strong CP problem are implausible, For example, if the result of a discrete symmetry, the symmetry must be very large<cit.>. On the other hand, we have seen that to avoid a domain-wall dominated universe, the tilt must be just barely smaller than that required for the PQ symmetry. One might be inclined to discard axions with these relatively low decay constants, but it is also tempting to ask whether such bizarre constraints might be satisfied as a result of anthropic considerations. In this section, we will examine this possibility. We do not attempt to understand precisely how, within some sort of landscape, this might be realized in detail. 
Rather we ask the simpler question: might the existence of observers be ruled out if we made a change in a single parameter, holding others fixed. If the tilt is controlled by a discrete symmetry, we might imagine that the symmetry violating terms in the axion potential have the form A M_p^-nΦ^n+4, where Φ = f_a e^i a f_a, A is now a complex constant (not the area) and Φ→ e^2 π i NΦ under the Z_N symmetry, so δ V = | A| f_a^4 ( f_a M_p )^N-4cos((Na f_a + α). In order that θ be small enough, assuming A is of order one, N must be quite large. If f_a = 10^12 GeV, we require N ≥ 13, for example. At the same time, our discussion of domain wall evolution implies that N can't be larger than 13. The lower limit on the tilt (upper limit on N) is a relatively easy one to explain anthropically. Were domain walls to dominate the energy density of the universe before their collapse, the universe would not evolve to a situation with structures of the sort we see in nature (and which support observers). The upper limit might be understood if we assume that a dark matter density close to that observed is necessary for (or perhaps optimizes the number of) observers. Write the tilt contribution to the axion potential as δ V(a) = ϵ 10^-10 m_π^2 f_π^2 cos(a f_a + α). We require ϵ < 1. Suppose ϵ was much larger, corresponding to N=11, for example, and ϵ∼ 10^4. Then the axion mass receives a larger contribution from δ V then from QCD, and it starts to oscillate when the temperature is of order 10^5 GeV. As a result, there are far too few axions to constitute the dark matter. Axions only account for about 10^-20 of the energy density initially. At temperatures of order 1 eV, they are only 10^-6 of the total energy of the universe. The case of n=12 is different. The contribution, at zero temperature, to the axion mass from δ V is smaller than that from QCD, but we have to investigate the behavior of the system as a function of temperature. The axion mass as a function of temperature behaves as<cit.>: m_a(T) ≈ m_a (Λ_QCD T )^3.7. Without δ V, the axion begins to oscillate when 3 H = m_a(T). This corresponds to a temperature of order 10 GeV. But m_a(T) < δ m_a down to a smaller temperature, of order 0.2 GeV if we take the formula <ref> seriously at these temperatures. So there is significant overproduction of dark matter. All of this is meant to establish that there is a plausible anthropic rationale for the size of δ V being such that axions constitute the dark matter, and the domain wall system collapses “just in time". We stress again that we don't have a detailed cosmological picture for how this might be implemented. § CONCLUSIONS In this paper we have adopted the conventional view of axion cosmology, that the PQ transition occurred after inflation and that the post inflationary universe was once at a temperature well above the scales of QCD, and focused on the resulting question of domain walls. We have recalled that the problem is generic, and admits only a small number of solutions. Among these, we have considered the effects of small, explicit breaking of the symmetry. We have recalled the well-known issue that the constraints of obtaining small enough θ and domain wall annihilation before domain wall dominance restrict the symmetry breaking to a narrow range. We have pointed out that it would be almost absurd for the size of this breaking to be a consequence of discrete symmetries. 
We have noted that, as for other seemingly absurd phenomena actually observed in nature, one might contemplate an anthropic solution. This has the virtue that it could explain the remarkable coincidences required. But most of our attention has been devoted to the fate of the universe in such a picture. We have studied the question of where the energy in the domain walls goes. We have argued that radiation during the collapse is negligible, so that the excess energy is converted first into kinetic energy of the walls, with an enormous Lorentz factor γ, and finally into a small number of ultrarelativistic axions and hadrons. This is in contrast to the possibility that much of the energy ends up in non-relativistic, dark matter axions<cit.>. The resulting constraints are mild, in the sense that they are not much stronger than the requirement that the domain wall energy not dominate the energy density of the universe before annihilation. § ACKNOWLEDGMENTS We thank Patrick Draper, Guido Festuccia and Pierre Sikivie for conversations and critical comments. This work was supported in part by U.S. Department of Energy grant No. DE-FG02-04ER41286.
http://arxiv.org/abs/2307.04638v2
20230710153526
DeePTB: A deep learning-based tight-binding approach with $ab$ $initio$ accuracy
[ "Qiangqiang Gu", "Zhanghao Zhouyin", "Shishir Kumar Pandey", "Peng Zhang", "Linfeng Zhang", "Weinan E" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "physics.comp-ph" ]
http://arxiv.org/abs/2307.05644v1
20230711104012
Lambert W random variables and their applications in loss modelling
[ "Meelis Käärik", "Anne Selart", "Tuuli Puhkim", "Liivika Tee" ]
stat.ME
[ "stat.ME", "62P05", "G.3" ]
Lambert W random variables and their applications in loss modelling Meelis KÄÄRIK, Anne SELART, Tuuli PUHKIM, Liivika TEE Institute of Mathematics and Statistics, University of Tartu Narva mnt 18, 51009 Tartu, Estonia e-mail: [email protected] Abstract. Several distributions and families of distributions are proposed to model skewed data, think, e.g., of skew-normal and related distributions. Lambert W random variables offer an alternative approach where, instead of constructing a new distribution, a certain transform is proposed <cit.>. Such an approach allows the construction of a Lambert W skewed version from any distribution. We choose Lambert W normal distribution as a natural starting point and also include Lambert W exponential distribution due to the simplicity and shape of the exponential distribution, which, after skewing, may produce a reasonably heavy tail for loss models. In the theoretical part, we focus on the mathematical properties of obtained distributions, including the range of skewness. In the practical part, the suitability of corresponding Lambert W transformed distributions is evaluated on real insurance data. The results are compared with those obtained using common loss distributions. Keywords: asymmetry, skewness, loss distributions, non-life insurance, probability distributions, Lambert W function. § INTRODUCTION tocsectionIntroduction Loss modelling is an essential part of actuarial and financial mathematics. In the past, several distributional models are applied, and the increasing volume of data and computational power motivate using even more complex distributions to fit the data. The data in the actuarial and financial fields is usually skewed. Several classical distributions can be used to fit skewed data; moreover, a unified approach for skewing symmetric distributions is introduced by <cit.>, where the shape of the normal distribution is deformed by a certain skewness parameter. Similarly, other asymmetric distributions (e.g., skew t-distribution) have been developed <cit.>. A review of applications of skew-elliptical distributions in actuarial and financial mathematics is given in <cit.>. In <cit.>, another method of generating skewness was introduced through Lambert W function, that applied to symmetric distributions, can produce skewness and a heavy tail. In addition, the Lambert W random variables can be seen as a generalization because the input distribution can be arbitrary, not necessarily symmetric. Instead of parametric manipulation of the original symmetric density function to introduce skewness, the random variable itself is transformed using the Lambert W function. The Lambert W function has been proved useful in mathematics, physics, chemistry, biology, engineering, risk theory and other fields but has been less used in statistical modelling. The Lambert W function is used to derive the exact distribution of the likelihood ratio test statistic in <cit.> and has also been used in more recent work such as <cit.>, amongst others. The approach of modelling the skewed random variables and symmetrizing data using the Lambert W function as a variable transformation is used in <cit.>. We will use <cit.> as the basis of the construction of this paper. The paper is organized as follows. In the first section, we give a short overview of Lambert W function. 
In Section 2, general definitions and expressions of the cumulative density functions and probability density functions of Lambert W random variables are introduced, followed by more detailed results about Lambert W normal and exponential distributions. In the last section, we describe the results of fitting the Lambert W normal and exponential distributions to two insurance-related data sets and compare the fit with the number of typical insurance models. Proofs of some properties, technical details of estimation, and additional figures of fitted distributions are presented in Appendix. § LAMBERT W FUNCTION AND ITS PROPERTIES In the following, we define the Lambert W function and give a brief overview of its properties. We refer to <cit.> for more details on the topic. Lambert W function is a set of inverse functions for the following function: f(x')=x'e^x' (x' ∈ℝ). In other words, x'=f^-1(x'e^x')=W(x'e^x'). Substituting x = x'e^x' leads us to the definition of Lambert W function. Lambert W function W(x) is defined by the following equality W(x)e^W(x)=x, x ∈[-1/e,∞). Note also that, in general, the function W(x) can be defined for real or complex arguments, and Equation (<ref>) has infinitely many solutions, most of which are complex. Following the notation by <cit.>, we denote the different branches of the function by W_k(x), where the branch index k ∈{0,± 1, ± 2, …} and x ∈ℂ. For real x, all branches besides W_0(x) and W_-1(x) are complex. For x ∈(-∞,-1/e), the equation has only complex solutions. We denote the branch corresponding to W(x) ≥ -1 by W_0(x) and call it the principal branch, and the branch corresponding to W(x) ≤ -1 by W_-1(x) and call it the non-principal branch. Some of the characteristic properties of the function are (see also Figure <ref>): * W(0)=0, * W_0(-1/e)=W_-1(-1/e)=-1, * W(e)=1, * W(1)=e^-W(1)=ln(1/W(1))=-ln W(1) ≈ 0.5671433, * lim_x → 0- W_-1(x) = -∞, * lim_x →∞ W_0(x) = ∞. Based on the construction as an inverse of a certain exponential function, the asymptotes of W are similar to those of the natural logarithm. More precisely, one can find the limits as follows: lim_x →∞W_0(x)/lnx= lim_x →∞xW_0(x)/x(1+W_0(x))= lim_x →∞1/1/W_0(x)+1=1 and lim_x → 0-W_-1(x)/ln(-x)= lim_x → 0-xW_-1(x)/x(1+W_-1(x))= lim_x → 0-1/1/W_-1(x)+1=1. At the same time, the absolute difference between Lambert's W function and natural logarithm, | W_0(x)-lnx|, goes to infinity for x →∞ <cit.>. § LAMBERT W RANDOM VARIABLES §.§ Definitions Next, we present the definitions of different types of Lambert random variables based on <cit.>. We also give the formulae of the cumulative distribution function (cdf) and probability density function (pdf) for scale and location-scale random variables. Let U be a continuous random variable with cdf F_U(u)=ℙ(U ≤ u), u ∈ℝ and pdf f_U(u), then Y:=Uexp(γ U), γ∈ℝ, is noncentral, nonscaled Lambert W × F_U random variable with skewness parameter γ. The skewness parameter γ can take any value on the real line, but as the exponential function is always positive, the transformation (<ref>) preserves sign. If γ=0, then Y=U. The effect of transformation on the shape of the distribution depends on the original variable U. If U has both positive and negative values, then positive γ folds back the tail with negative values at point -1/γ, thus relocating part of negative U values, and on the positive side moves values further away, making the right tail heavier. Negative γ acts the other way around. 
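Numerically, both real branches are readily available in standard libraries, which makes it easy to check the properties listed above and to see the effect of the transform on a symmetric input. The sketch below uses Python's scipy.special.lambertw; the sample size and the value γ = 0.2 are arbitrary choices for illustration only.

```python
import numpy as np
from scipy.special import lambertw
from scipy.stats import skew

# the two real branches and the defining identity W(x) exp(W(x)) = x
x = np.array([-0.3, -0.1, 0.5, 1.0, np.e])
W0 = lambertw(x, k=0).real                      # principal branch, x >= -1/e
print(np.allclose(W0 * np.exp(W0), x))          # True
print(lambertw(1).real)                         # 0.5671433... (property 4 above)
print(lambertw(np.e).real, lambertw(0).real)    # 1.0 and 0.0
print(lambertw(-0.1, k=-1).real)                # non-principal branch, -1/e <= x < 0

# the transform Y = U exp(gamma * U) applied to a symmetric U
rng = np.random.default_rng(1)
gamma = 0.2
u = rng.standard_normal(100_000)
y = u * np.exp(gamma * u)
print(y.min(), -1 / (gamma * np.e))   # the left tail is folded back near -1/(gamma e)
print(skew(u), skew(y))               # roughly 0 for U, clearly positive for Y
```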
Note also that for a skewed U, the Lambert W transform can produce a more symmetric random variable. The transformation given with (<ref>) is not scale or location invariant. To keep these properties, we must include the transformed variable's location and scale parameters in the definition. Let X be a continuous random variable from a location-scale family with cdf F_X(x|β), where β is the corresponding parameter vector. Let U=X-μ/σ be the zero-mean unit-variance version of X. Then Y:={Uexp(γ U)}σ+μ, γ∈ℝ,  σ>0, is location-scale Lambert W × F_X random variable with parameter vector (β,γ). If γ >0, the location-scale Lambert W × F_X random variable takes values in interval (μ- σ/γ e, ∞). For negative γ, on the contrary, Y has an upper bound, and values are in interval (-∞, μ - σ/γ e). For γ>0, the cdf and pdf of a location-scale Lambert W× F_X random variable are F_Y(y|β,γ)= 0, if y≤μ-σ/γ e , F_X(.W_0(γ z)/γσ+μ|β) -F_X(.W_-1(γ z)/γσ+μ|β), if μ-σ/γ e < y <μ , F_X(.W_0(γ z)/γσ+μ|β), if y ≥μ, and f_Y(y|β,γ)= 0, if y≤μ-σ/γ e, f_X(.W_0(γ z)/γσ+μ|β)W'_0(γ z)/γ -f_X(.W_-1(γ z)/γσ+μ|β)W'_-1(γ z)/γ, if μ-σ/γ e < y <μ , f_X(.W_0(γ z)/γσ+μ|β)W'_0(γ z)/γ, if y ≥μ, where z=y-μ/σ and we denote the derivative of W(γ z) by z as W'(γ z) = dW(γ z)/dz=exp(-W(γ z))/1+W(γ z)γ =W(γ z)/ z(1+W(γ z)). In (<ref>), the principal and non-principal branches are not distinguished as it holds for both. The derivation of these expressions can be found in <cit.>. For γ<0, the derivation and resulting expressions are similar, but the three regions considered are pivoted: the first region is y≤μ where only the principal branch is used, the second region is μ<y< μ-σ/γ e where both branches are used, and for last region y≥μ-σ/γ e where the cdf has reached 1, and the pdf is equal to 0. For a non-negative X from the scale family, we can define the corresponding scale-family Lambert random variable as follows. Let X be a non-negative continuous random variable from a scale family with cdf F_X(x|β), where β is the parameter vector. Let U=X/σ be the unit-variance version of X. Then Y:={Uexp(γ U)}σ = Xexp(γ X/σ), γ∈ℝ,  σ>0, is scale Lambert W × F_X random variable with parameter vector (β,γ). If γ>0, the cdf and pdf for a scale Lambert random variable are easily found as the transformation (<ref>) takes values only on the positive side of the real line, and we apply the transformation W also on positive arguments, so only primary branch plays a role. Hence the cdf has the following form: F_Y(y|β,γ)= 0, y < 0, F_X(.W_0(γ y/σ)/γσ|β), y ≥ 0. Taking the derivative of (<ref>), we get the following form for the pdf f_Y(y|β,γ)= 0, y < 0, f_X(.W_0(γ y/σ)/γσ|β)exp(-W_0(γ y/σ))/1+W_0(γ y/σ) y ≥ 0. Our primary focus is on positive γ that produces a heavier right tail to right-skewed distribution, possibly making the distribution more suitable for describing insurance losses. Yet, the results for γ<0 are not as straightforward as they were for the location-scale family case. Thus, to complete the theory, we analyze this situation as well and derive the cdf and pdf. First the cdf F_Y(y) =ℙ(Y≤ y)=ℙ(Uexp(γ U)σ≤ y) =ℙ(γ Uexp(γ U)≥γ y/σ) = 1 - ℙ(γ Uexp(γ U)≤γ y/σ). Now, as argument γ y/σ is negative for y>0, both branches are needed as we apply the Lambert function. Hence F_Y(y) = 1- ℙ(W_-1(γ y/σ)≤γ U ≤ W_0(γ y/σ)) = 1- ℙ(W_-1(γ y/σ)/γ≥ U ≥ W_0(γ y/σ)/γ) = 1- F_X(.W_-1(γ y/σ)/γσ|β) + F_X(.W_0 (γ y/σ)/γσ|β). At point y=-σ/γ e principal and non-principal branches are equal, so this is the point where the cdf F_Y reaches 1. 
In summary, if γ<0 F_Y(y|β,γ)= 0, if y≤ 0 , 1- F_X(.W_-1(γ y/σ)/γσ|β) +F_X(.W_0(γ y/σ)/γσ|β), if 0 < y <-σ/γ e , 1, if y ≥ -σ/γ e and the corresponding pdf is f_Y(y|β,γ)= 0, if y≤ 0 or y ≥ -σ/γ e, f_X(.W_0 (γ y/σ)/γσ|β)exp(-W_0(γ y/σ))/1+W_0(γ y/σ) -f_X(.W_-1(γ y/σ)/γσ|β)exp(-W_-1(γ y/σ))/1+W_0(γ y/σ), if 0 < y <-σ/γ e. §.§ Lambert W normal distribution In this section, we apply the Lambert location-scale transformation (<ref>) on a normal random variable X∼ N(μ, σ). The resulting random variable Y = X-μ/σexp(γX-μ/σ)σ + μ. is a Lambert W× N(μ, σ) random variable with parameter vector (μ, σ, γ). Without the loss of generality, we assume that the skewness parameter γ is positive; the situation is mirrored for negative γ-s (left skew instead of right skew). Using (<ref>), the cdf for a positive skewness parameter γ can be written as F_Y(y|μ,σ,γ)= 0, if y≤μ -σ/γ e , Φ(W_0(γ z)/γ) -Φ(W_-1(γ z)/γ), if μ -σ/γ e < y <μ, Φ(W_0(γ z)/γ), if y ≥μ, where z = y-μ/σ and Φ is the standard normal cdf. Likewise, using (<ref>), we get the pdf for γ>0 as f_Y(y|μ,σ,γ)= 0, if y≤μ -σ/γ e, f_0(y-μ/σ)-f_-1(y-μ/σ), if μ -σ/γ e < y <μ , f_0(y-μ/σ), if y ≥μ, where f_0(z) and f_-1(z) are the components of the pdf corresponding to the principal and non-principal branch, respectively: f_0(z) = 1/√(2π)exp(-(W_0(γ z))^2/2γ^2)exp(-W_0(γ z))/1+W_0(γ z), f_-1(z) =1/√(2π)exp(-(W_-1(γ z))^2/2γ^2)exp(-W_-1(γ z))/1+W_-1(γ z). Some examples of cdf and pdf for Lambert W× N(0,1) distribution with γ>0 are shown in Figure <ref>. In the following, we give some results that describe the behaviour of the pdf of a Lambert W normal random variable. To keep the proofs technically cleaner, the analysis is applied to Lambert W× N(0,1) random variables; generalization to Lambert W× N(μ,σ) is straightforward. Proofs of these lemmas are presented in Appendix <ref>. The pdf of a Lambert W× N(0,1) random variable Z, f_Z, has an asymptote at -1/γ e: lim_z→-1/γ e f_Z(z) = ∞. The point -1/γ e where f_Z has an asymptote can be thought of as a point where the transformation folds the left tail of N(0, 1) and fits it into the interval (-1/γ e, 0). So at this turning point, the density piles up, see Figures <ref>, <ref>, <ref>, <ref>, for example. Although the transformation squeezes the negative values of N(0, 1) into a fixed interval and makes the right tail heavier, it still has zero as a point where the probability mass is divided into equal halves. Also, at the point z=0, the pdf f_Z equals to the pdf of N(0, 1), i.e. f_Z(0)=1/√(2π). This property is pointed out in the right panels of Figures <ref> and <ref>. The principal branch component of the pdf of a Lambert W× N(0,1) random variable, f_0, has the following properties. The function f_0(z) a) has two local extrema (maximum and minimum) if γ∈ (0,√(2)-1); b) is monotone decreasing if γ > √(2)-1. The non-principal branch component of the pdf of a Lambert W× N(0,1) random variable, f_-1, has the following properties. The function f_-1(z) a) is monotone increasing (to 0) if γ∈ (0,√(2)+1); b) has two local extrema (maximum and minimum) if γ > √(2)+1. Consequently, depending on the value of skewness parameter γ, we can distinguish three main shapes of the pdf of Lambert W normal random variables. First, if γ∈ (0, √(2)-1), the pdf has two local extrema due to the principal branch component f_0, see Figure <ref> or the left panel of Figure <ref>, for example. Secondly, if γ∈ [√(2)-1,√(2)+1], the pdf is strictly decreasing function of z, as in the right panel of Figure <ref>. 
Thirdly, if γ > √(2)+1, the pdf again has two local extrema, now due to the non-principal branch component f_-1, and compared to the first case, the overall shape of pdf is different as seen in Figures <ref> and <ref>. In these two figures, the right panel gives a more detailed view of the interval where the maximum is placed. As one can see, especially in Figure <ref>, the sharp peak turns out to be quite smooth if looked more closely. Lastly, we give the expressions of moments and skewness coefficient of Lambert W× N(μ, σ) random variable. The moments of Lambert W× N(0,1) random variable can be found using the moment generating function (mgf) of the underlying standard normal distribution. Let Z be Lambert W× N(0,1) random variable then the moments for Z are the following <cit.>: E(Z^k)=1/k^k∂^k/∂γ^k M_N(0,1)(γ k)= 1/k^k∂^k/∂γ^kexp(γ^2k^2/2), where M_N(0, 1) denotes the mgf of N(0, 1). For the general case, i.e. for Lambert W× N(μ, σ) random variable Y, we can use the properties of location-scale family, so E(Y^k) = E((Zσ+μ)^k). As the moments are found using derivatives of an exponential function, the moments of any order k exist and are finite. Using the expressions given above, we can derive the formulae for the mean of Y EY = μ+σγ e^γ^2/2, the variance of Y DY=σ^2 e^γ^2(e^γ^2(1+4γ^2)-γ^2), and the skewness coefficient γ_1(Y): γ_1(Y)= γ(e^3γ^2 (9+27γ^2)-e^γ^2(3+12γ^2)+2γ^2/(e^γ^2(1+4γ^2)-γ^2)^3/2). The skewness coefficient is a monotone function of γ, with the same sign. As γ→±∞, also γ_1(Y) →±∞, and the speed of growth is exponential. For example, if we look at the range of values γ∈(√(2)-1; √(2)+1), where the pdf is monotone decreasing, the skewness coefficient grows from around 3 to 20000, see Figure <ref>. §.§ Lambert W exponential distribution Let X be an exponentially distributed random variable with parameter λ>0 (λ as rate). Then the transformed random variable Y = X e^γλ X has Lambert W× Exp(λ) distribution with parameter vector (λ,γ). According to (<ref>), for positive γ, the cdf of Y is F_Y(y|λ,γ)= 1-exp(- W_0(γλ y)/γ), y≥0, and, using (<ref>), the pdf of Y is f_Y(y|λ,γ)= λexp(- W_0(γλ y)/γ)exp(-W_0(γλ y))/1+W_0(γλ y), y≥0. For γ <0, the expressions for cdf and pdf also involve the non-principal branch of the Lambert W function as was seen in (<ref>) and (<ref>): F_Y(y|λ,γ)= 1-exp(- W_0(γλ y)/γ) + exp(- W_-1(γλ y)/γ) , 0≤ y < -1/eγλ, and f_Y(y|λ,γ)= λexp(- W_0(γλ y)/γ)exp(-W_0(γλ y))/1+W_0(γλ y) - λexp(- W_-1(γλ y)/γ)exp(-W_-1(γλ y))/1+W_-1(γλ y) 0≤ y < -1/eγλ. For examples of pdf and cdf for the Lambert W× Exp(1) distribution, see Figures <ref> and <ref>. As seen from Figure <ref>, compared to the exponential distribution the Lambert random variables have a heavier tail in the case of positive γ. For negative γ values (see Figure <ref>), the random variable Y takes values in the fixed interval (0, -1/eγλ) as the transformation relocates the larger values of underlying exponential random variable X. It can be argued that this kind of transformation is not relevant for typically heavy-tailed insurance data, but our example (see Tables <ref> and <ref>) shows an adequate fit by the Lambert W exponential random variables with γ<0 for log claims of Danish fire loss data. In the case of γ <0, if the absolute value of γ is small, it produces a distribution with a suitably large cut-off point to fit data with moderate tails as the Danish log claims data is. Similarly, for positive γ, only small values of γ are of practical use, as the tail becomes heavy very quickly. 
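The closed-form mean, variance and skewness coefficient of the Lambert W normal distribution given above can be cross-checked quickly against simulation. The following sketch assumes NumPy and SciPy and uses arbitrary illustrative parameter values.

import numpy as np
from scipy.stats import skew

mu, sigma, gamma = 1.0, 2.0, 0.2
mean_f = mu + sigma * gamma * np.exp(gamma**2 / 2)
var_f = sigma**2 * np.exp(gamma**2) * (np.exp(gamma**2) * (1 + 4 * gamma**2) - gamma**2)
skew_f = gamma * (np.exp(3 * gamma**2) * (9 + 27 * gamma**2)
                  - np.exp(gamma**2) * (3 + 12 * gamma**2)
                  + 2 * gamma**2) / (np.exp(gamma**2) * (1 + 4 * gamma**2) - gamma**2)**1.5

rng = np.random.default_rng(2)
u = rng.standard_normal(500_000)
y = u * np.exp(gamma * u) * sigma + mu   # Lambert W x N(mu, sigma) sample
print(mean_f, y.mean())                  # formula vs. sample mean
print(var_f, y.var())                    # formula vs. sample variance
print(skew_f, skew(y))                   # formula vs. sample skewness (location/scale free)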
Indeed, if γ≥ 1, a Lambert W× Exp(λ) random variable does not even have a finite first moment. For γ <1, the first moment is 1/(λ(1-γ)^2). In general, the following expression holds: EY^k = k!/(λ^k(1-kγ)^(k+1)), γ < 1/k. The skewness coefficient for a Lambert W× Exp(λ) random variable with γ<1/3 can be calculated as γ_1(Y) = 2√((1-2γ)^9/(2γ^4-2γ+1)^3)(3(1-γ)^4((1-γ)^2(1-2γ)^3 - (1-3γ)^4)/(1-3γ)^4(1-2γ)^3+1). If γ≥1/3, the third moment of Y is infinite, and the coefficient γ_1(Y) cannot be computed. The skewness coefficient is a non-monotonic function of γ, see also Figure <ref>. If γ=0, the distribution simplifies to the exponential distribution, and the skewness coefficient is γ_1 = 2. For a Lambert W× Exp(λ) distribution, the skewness coefficient γ_1 can exceed 2, and it approaches infinity as γ→1/3. For values -∞<γ<-1, the skewness coefficient is a decreasing function of γ with minimum value -9√(15)/50, and for γ∈(-1,1/3), it is increasing (see Figure <ref>). § FITTING LAMBERT W RANDOM VARIABLES TO INSURANCE DATA In this section, we fit the Lambert W normal and exponential random variables to two well-known data sets, the US indemnity data introduced in <cit.> and the Danish fire data introduced in <cit.>, and compare the fit with previous results. These data sets have been widely used in the field-specific literature before: see, e.g., <cit.> for the US indemnity data, or <cit.> for the Danish fire data, among others. A consolidated overview of the results is given in <cit.>. To recall the distributions of these example data sets, see Figure <ref> for the US indemnity data and Figure <ref> for the Danish fire loss data. In both figures, the left panel presents the data on the original scale (thousands of USD for the US indemnity data and millions of DKK for the Danish fire data) and the right panel shows the same data after a log transformation. In the case of the log-transformed data, we use a similar shift as in <cit.> to keep the results comparable. More precisely, the transformation ln(y) - min(ln(y)) + 10^-10 is applied to the original variable y. As one can see, both data sets are very skewed on the original scale. For the Danish fire data, the skewness is more extreme, as given by the skewness coefficient γ_1=18.74 compared to γ_1=9.15 for the US indemnity data. In the case of the US indemnity data, the log-transformed data produce an almost symmetric histogram, very similar to a normal distribution. The log transform also reduces the skewness of the Danish fire data, but the result is still skewed, with γ_1=1.76. In <cit.>, 19 distributions are fitted to the two aforementioned data sets, with the result that the skew-normal and skew t distributions are reasonably competitive compared to other models commonly used for insurance data. In our research, we follow this setup and include all fitted continuous distributions, adding three more distributions to that list: the Lambert W normal and exponential distributions as our main contribution, and the Pareto distribution, which was previously missing due to technical problems. We use the maximum likelihood method for parameter estimation, as was done in <cit.>. For more details of the estimation process, see Appendix <ref>. To compare these models with the competitors, we measure the goodness of fit between the data and a distribution by the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). BIC is included because the number of parameters of the distributions ranges from 1 to 5, making the penalty of AIC quite small compared to the flexibility that additional parameters can give.
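As an illustration of this fitting workflow (a sketch only; the estimation in the paper itself is carried out with the routines detailed in the Appendix), the following Python code fits the Lambert W × Exp(λ) model with γ>0 to a synthetic sample by maximum likelihood using SciPy and reports the corresponding AIC and BIC values. The optimizer, starting point and data are illustrative assumptions, not taken from the paper.

import numpy as np
from scipy.special import lambertw
from scipy.optimize import minimize

rng = np.random.default_rng(3)
lam_true, gamma_true = 1.5, 0.15
x = rng.exponential(scale=1 / lam_true, size=5000)
y = x * np.exp(gamma_true * lam_true * x)       # synthetic Lambert W x Exp(lambda) data

def negloglik(theta):
    lam, gamma = np.exp(theta)                  # optimize on the log scale to keep both positive
    w = lambertw(gamma * lam * y, k=0).real
    # log-density of the Lambert W x Exp model for gamma > 0 (cf. the pdf given above)
    return -np.sum(np.log(lam) - w / gamma - w - np.log1p(w))

res = minimize(negloglik, x0=np.log([1.0, 0.1]), method="Nelder-Mead")
lam_hat, gamma_hat = np.exp(res.x)
k, n = 2, y.size
aic = 2 * k + 2 * res.fun
bic = k * np.log(n) + 2 * res.fun
print(lam_hat, gamma_hat, aic, bic)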
Before the comparison, let us first look at the parameter estimates of Lambert W distributions in Table <ref>. What is interesting in the case of Lambert W exponential model, is the negative γ estimate for the Danish log data, as this produces distribution with the upper bound -1/γ e λ, here evaluating to 7.82. As the maximum value in data is around 5.57, this model allows yet higher claim values as seen in the data. Also, this model suited the data well, as it ranked high according to BIC value (see Table <ref>, discussed later on). For the same data on the original scale, which was highly skewed, the fit was not good though and the γ estimate 0.096 could be considered unexpectedly low. The opposite holds for US indemnity data, as the γ estimate 0.496 would give an infinite skewness coefficient. At the same time, the fit with data according to BIC is good relative to other models (see Table <ref> and comments later). As the US indemnity data on the log scale is near normal, the Lambert W exponential is not a really suitable model here. But the estimate γ̂=-0.321 responds to skewness coefficient value 0.09 ie this model is still able to pick up the symmetry of the data. From Lambert W normal parameter estimates we can point out that for both data sets on the original scale, the γ estimates are in the interval that produces monotone decreasing pdf. For US indemnity data on the log scale, the γ estimate -0.021 produces a distribution very similar to normal and this is in agreement with what we saw in the histogram. For the Danish log data, the estimate for γ is 0.373 that is in the interval (0, √(2) - 1) and corresponds to the pdf shape with some downward bend between the asymptote and maximum as in the left panel of Figure <ref>. As shown in the following analysis, the fit provided by Lambert W transformed random variables is promising. The results of model fitting are presented in Tables <ref> and <ref>. The distributions are sorted in ascending order by the number of parameters, but the two newly added Lambert W distributions are kept at the top of the table. In every column, the first three results are marked: the best result is in bold, the second best is underlined, and the third best is underlined and in italics. In the case of US indemnity data (see Table <ref>), we saw earlier that the log-transformed data resembled closely the normal distribution. Therefore it is expected that log-normal distribution gives the best fit for data in the original scale. But the Lambert W exponential model also gives a good fit, with the second-best AIC and BIC values. For log-transformed data, the two smallest AIC values are almost equal, with the following block having very close values. So skew-normal and Lambert W normal distributions share the first place and skew t follows at the top of the next block. Based on BIC, normal distribution, having fewer parameters than skew-normal or Lambert W normal gives the best fit. Skew-normal and Lambert W normal distribution fall to second and third place. The pdf-s for the best three models with data histograms are plotted in Figure <ref> in Appendix <ref>. As one can see, the pdf-s of the best three models are very close, the main difference being the region of small claims in the original scale. For the log-transformed data, the three curves practically coincide. From Table <ref> we can see that for the Danish fire data on the original scale, the two best-fitting models are skew t and the Lambert W normal distribution. 
For the Danish log data, the Lambert W normal distribution again has the best fit, followed by skew t based on AIC. Based on BIC, the best model is the Lambert W normal distribution, and the Lambert W exponential has the second best result. Also, for illustration see Figure <ref> in Appendix <ref>. The resulting three best pdf-s on the original data are very similar. On the log-transformed data, the discrepancies are not big either, but they are more clearly visible. In conclusion, the Lambert W models give a good fit to both the original and the log-transformed data. § SUMMARY The paper addresses the Lambert W transform-based approach and the properties of the resulting distributions. The Lambert W normal and Lambert W exponential distributions were investigated more thoroughly. The skewness is introduced via the Lambert W transform and the skewness parameter γ. Without loss of generality, we focus on positive γ, which is of more interest in loss modelling applications. For the Lambert W standard normal distribution with positive skewness parameter γ, the pdf f(y) has an asymptote at y=-1/(γ e). We also established the following three regions based on the shape of the pdf: a) if γ∈ (0, √(2) - 1), the pdf has two local extrema; b) if γ∈ (√(2) - 1, √(2) + 1), the pdf is monotone decreasing; c) if γ > √(2) + 1, the pdf has two local extrema. In the first range, where γ∈ (0, √(2) - 1), the shape of the distribution is, at first glance, not the most suitable for loss modelling, or at least needs additional explanation. Nevertheless, one can argue that the asymptote effect is reasonably small, so the distribution may still give a good fit (consider, e.g., the Danish fire log data). Such a shape might also be suitable in zero-altered models, where zero claims are included. The second, most appealing range, where the pdf is monotone decreasing, covers a wide range of skewness coefficient values, see Figure <ref>. If γ = √(2) + 1, the skewness coefficient is about 20 000; thus, the not-very-suitable shape in the third range is not a problem for most practical applications anyway. For the Lambert W exponential distribution, we established that it allows a wider range of skewness coefficients than the exponential distribution. The additional parameter also relaxes the rigid relation between the mean and variance of the exponential distribution. These properties make the Lambert W exponential distribution a promising model for insurance loss data. The results of the practical part show that the Lambert W transformed distributions, operating in a wide range of skewness, are a viable choice for insurance loss modelling. Both the normal and the exponential distribution-based transforms show a reasonably good fit. An especially illustrative proof of flexibility is visible in the Danish fire data, where the Lambert W normal model ranks at or near the top for both the original and the log-transformed data set. Clearly, the choices within the Lambert W approach are not limited to the Lambert W normal and exponential random variables. While the normal and exponential distributions seem a natural starting point for loss modelling, other base distributions can offer a valuable contribution. § ACKNOWLEDGEMENTS The authors are thankful to Roel Verbelen for constructive discussions and comments on an earlier draft of the paper. § FUNDING This work was supported by the Estonian Research Council grant PRG1197.
§ PROOFS OF THE PROPERTIES OF LAMBERT W STANDARD NORMAL RANDOM VARIABLES In this appendix, we give the proofs of the properties of Lambert W standard normal random variables formulated in Lemmas 1–3 (in Section 2.2). Let us recall that the density f_Z(z) can be expressed as f_0(z)-f_-1(z) for z∈ (-1/γ e,0], where f_0 and f_-1 are the principal and non-principal branch component of the pdf, respectively. Let us recall the form of the principal branch component f_0(z) as specified in (<ref>): f_0(z) = 1/√(2π)exp(-(W_0(γ z))^2/2γ^2)exp(-W_0(γ z))/1+W_0(γ z), for z>-1/γ e. Looking separately at the components of this expression, it is easy to see that W_0(γ z)→ -1, (W_0(γ z))^2→ 1, and 1+W_0(γ z)→ 0+, if z→ -1/γ e+. Thus, the ratio exp(-W_0(γ z))/1+W_0(γ z) tends to infinity in the process, which implies that principal branch component f_0(z), specified in (<ref>), goes to infinity if z→-1/γ e+. Similar argumentation holds for the non-principal branch component f_-1(z). Let us first recall that, as stated in Formula (<ref>), the non-principal branch component has the following form f_-1(z) = 1/√(2π)exp(-(W_-1(γ z))^2/2γ^2)exp(-W_-1(γ z))/1+W_-1(γ z), with z∈ (-1/γ e,0]. Analyzing the components of this expression separately, one can see that W_-1(γ z)→ -1, (W_-1(γ z))^2→ 1, and 1+W_-1(γ z)→ 0- in the process where z→-1/γ e+. This implies that exp(-W_-1(γ z))/1+W_-1(γ z)→ -∞, which, in summary, results in lim_z→-1/γ e+f_-1(z) = -∞. In conclusion, since f_Z(z) = f_0(z)-f_-1(z) for z∈ (-1/γ e,0], we havelim_z→-1/γ e+f_Z(z) = ∞. The lemma is proved. Let us first note that the Formulae (<ref>) and (<ref>) differ only by the specification of the branch (W_0 or W_-1). Since most of the following argumentation holds for both branches, we don't specify the branch unless we explicitly need to. In other words, we start searching for the extrema of the function 1/√(2π)exp(-(W(γ z))^2/2γ^2)exp(-W(γ z))/1+W(γ z). To investigate the existence of extrema for different values of γ>0, we first have to take the derivative from expression (<ref>) by z (ignoring the constant in front): (exp(-(W(γ z))^2/2γ^2-W(γ z))/1+W(γ z))' =exp(-(W(γ z))^2/2γ^2-W(γ z)) (-(W(γ z))^2/2γ^2-W(γ z))'/(1+W(γ z)) -exp(-(W(γ z))^2/2γ^2-W(γ z))(1+W(γ z))'/(1+W(γ z))^2. Using Formula (<ref>), we can write (-(W(γ z))^2/2γ^2-W(γ z))' =-2W(γ z)W'(γ z)/2γ^2-γ W'(γ z) =-W(γ z)exp(-W(γ z))/γ(1+W(γ z)) -γexp(-W(γ z))/1+W(γ z) =-exp(-W(γ z))(W(γ z)+γ^2)/γ (1+W(γ z)) and (1+W(γ z))'=γ W'(γ z)= γexp(-W(γ z))/1+W(γ z). Now, substituting the results into Formula (<ref>), leads us to exp(-(W(γ z))^2/2γ^2-W(γ z))(-exp(-W(γ z)))((W(γ z))^2+(1+γ^2)W(γ z)+2γ^2)/γ(1+W(γ z))^3=0. The equality (<ref>) holds if the numerator is zero and the denominator is not. Since we assume γ>0, the denominator gives a restriction z≠ -1/γ e, which is accounted for already. In the numerator, as the value of the exponential function is positive for any fixed argument (except for z=0 for the non-principal branch that is dealt with separately), thus we need to solve the quadratic equation (W(γ z))^2+(1+γ^2)W(γ z)+2γ^2=0 with respect to W(γ z). The solution to this equation is on the form W(γ z)=-1+γ^2/2±√((1+γ^2/2)^2-2γ^2) =-1-γ^2±√(γ^4-6γ^2+1)/2. Let us look more closely at the expression under the square root that determines the number of solutions for Formula (<ref>). Solving this equation γ^4-6γ^2+1=0 with respect to γ gives γ^2=3±2√(2)⇔γ=±√(3±2√(2))= ±(√(2)±1). 
Because of the initial assumption γ>0, we are interested in 2 of these 4 solutions: γ^(1)=√(2)-1≈ 0.4142 and γ^(2)=√(2)+1≈ 2.4142. We have established that the pdf of a Lambert W× N(0,1) random variable has two extrema in the following regions of the skewness parameter γ: γ>√(2)+1 0<γ<√(2)-1. If √(2)-1<γ<√(2)+1, there are no real solutions for Formula (<ref>). Now, let us restrict ourselves to the principal branch of the pdf and check which of the found values for γ are within the range of values of the principal branch, i.e. W(γ z)>-1. Formula (<ref>) then leads us to the equality -1-γ^2±√(γ^4-6γ^2+1)/2>-1 ⇔γ^2-1<±√(γ^4-6γ^2+1), where the solution for positive values for γ is 0<γ≤√(2)-1. Thus, the function has two extrema in the interval 0<γ≤√(2)-1 and is monotone decreasing for γ>√(2)-1. The lemma is proved. To prove this result, we can use the reasoning given in the previous proof until Formula (<ref>), as this holds for both principal and non-principal branches. It is now sufficient to check which solutions also comply with the restriction W(γ z)<-1. Solving the equality -1-γ^2±√(γ^4-6γ^2+1)/2<-1 ⇔γ^2-1>±√(γ^4-6γ^2+1) for positive values of γ gives us γ≥√(2)+1. Thus, the function has two extrema if γ∈ (√(2)+1,∞), and we have proved part b) of the lemma. Similarly, the Equation (<ref>) has no real solutions for γ<√(2)+1. Now, the assertion that lim_z↑ 0 f_-1(z)=0 follows from the construction, see Formula (<ref>), which proves part a) of the lemma. The lemma is proved. § DETAILS OF ESTIMATION We use several R <cit.> packages for parameter estimation. In the case of hyperbolic, generalized hyperbolic, variance gamma and normal inverse Gaussian distributions, we use a routine from the package ghyp <cit.>; for skew-normal and skew t distributions, the sn package <cit.> and all other cases, the fitdistrplus package <cit.>. Also, we apply some functions from the package LambertW <cit.> to produce the pdf and cdf for Lambert W normal distribution. To access the US indemnity data, we use the R package fExtremes <cit.> and the package copula <cit.> for Danish fire loss data. We exploit default starting values from the relevant package’s MLE routines, except for the Lambert W distributions. In that case, we apply the method of moments to get the starting point for MLE. Next, we give a more detailed overview of the selection of starting values. Lambert W normal distribution. To derive the starting values for the Lambert W× N(μ ,σ) distribution we use the mean, variance and skewness coefficient of the Lambert W× N(μ ,σ) random variable Y given with Formulas (<ref>), (<ref>), and (<ref>). We first equate (<ref>) with the sample skewness coefficient and solve it numerically to produce γ_0. Next, the expressions of mean and variance are used, first substituting γ_0 and the sample variance s_y^2 of Y to (<ref>), and then solving it for σ to obtain the starting value σ_0 = √(s_y^2/(e^γ_0^2(e^γ_0^2(1+4γ_0^2)-γ_0^2))). Lastly, substitute γ_0, σ_0 and sample mean y̅ to (<ref>) and solve for μ to get μ_0=y̅ - σ_0γ_0 e^γ_0^2/2. Lambert W exponential distribution. In the case of the Lambert W× Exp(λ) distribution, we use the formula for the skewness coefficient (<ref>) with the sample-based estimate γ̂_1 to get a starting value γ_0 for the skewness parameter. As skewness coefficient γ_1 is a non-monotone function of γ, see Figure <ref>, only solutions in the interval (-1, 1/3) are used, as the values γ < -1 will produce too drastic truncation. 
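A sketch of this first step is given below (assuming SciPy; the actual implementation in the paper uses the R packages listed above). Instead of coding the closed-form skewness expression, the theoretical skewness is computed from the raw moments EY^k = k!/(λ^k(1-kγ)^(k+1)), in which λ cancels, and matched to the sample skewness by root finding restricted to (-1, 1/3); the starting value for the rate parameter is then obtained from the first-moment equation described next.

import numpy as np
from math import factorial
from scipy.optimize import brentq
from scipy.stats import skew

def lw_exp_skewness(gamma):
    # theoretical skewness of the Lambert W x Exp distribution; lambda cancels, so set lambda = 1
    m1, m2, m3 = (factorial(k) / (1 - k * gamma)**(k + 1) for k in (1, 2, 3))
    return (m3 - 3 * m1 * m2 + 2 * m1**3) / (m2 - m1**2)**1.5

def gamma_start(sample):
    g1 = skew(sample)
    # assumes g1 is attainable on (-1, 1/3), where the skewness is increasing
    return brentq(lambda g: lw_exp_skewness(g) - g1, -0.99, 1 / 3 - 1e-6)

sample = np.random.default_rng(0).exponential(size=2000)
print(gamma_start(sample))    # close to 0 for exponential data (sample skewness near 2)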
For rate parameter λ, we use the expression of the first moment and solve y̅ = 1/λ(1-γ_0)^2 to derive the formula λ_0 = 1/y̅(1-γ_0)^2. for starting value of λ. § DATA HISTOGRAMS WITH THREE BEST FITTING MODELS
http://arxiv.org/abs/2307.04444v1
20230710095313
Weak gravitational lensing by an ESTGB black hole in the presence of a plasma
[ "Qian Li", "Yu Zhang", "Zhi-Wen Lin", "Qi-Quan Li", "Qi Sun" ]
gr-qc
[ "gr-qc" ]
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China. [email protected] (Corresponding author) Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China. Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China. Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China. Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China. This paper is devoted to studying the weak-field gravitational lensing properties of a 4D ESTGB black hole, which is surrounded by the plasma medium. The effects of the magnetic charges and the three plasma distribution models in the deflection of light around a 4D ESTGB black hole are investigated in detail. We find that the uniform plasma leads to a larger deflection of light rays in comparison with the singular isothermal sphere (SIS), the non-singular isothermal sphere (NSIS) models. Moreover, the deflection angle increases slightly as the absolute value of the magnetic charge decreases. Finally, we analyze the total magnification of image due to weak gravitational lensing around the black hole. The result shows that the presence of a uniform plasma medium remarkably enhances the total magnification whereas the non-uniform plasma reduces the total magnification. Weak gravitational lensing by an ESTGB black hole in the presence of a plasma Qi Sun ============================================================================== Keywords: Black hole, Weak graviatational lensing, Plasma PACS numbers: 04.70.Dy, 04.50.Kd, 03.65.Xp § INTRODUCTION As one of Einstein's general relativity predictions, black holes are the most mysterious objects in the present universe. Because the light ray is unable to escape the event horizon, which is a one-way causal boundary, black holes are not visible objects, and their existence can only be proven indirectly. However, with the development of related astronomical technology, the EHT cooperation organization <cit.> published the shadow of a supermassive black hole in 2019. This may be another powerful evidence of the existence of black holes after LIGO-Vigro detected the gravitational wave signals generated by the merger of binary black holes <cit.>. In addition to the standard general relativity, many modified gravity theories are proposed due to fundamental general relativity may not hold in high- or low- curvature regimes, such as the extended scalar-tensor-Gauss-Bonnet (ESTGB) theory <cit.>. It is given through the coupling of the Gauss-Bonnet invariant with a scalar field owing to avoidance of Ostrogradski instability, which is a special and interesting extension. This modified theory is a natural modification of general relativity and extension of the standard scalar-tensor theory. The Doneva and Yazadjiev indicated that below a certain critical mass, the Schwarzschild spacetime becomes unstable in ESTGB gravity <cit.>. The ESTGB theory can explain the phenomenon of the present stage of cosmic acceleration in cosmology <cit.>. Shortly thereafter, Cañate and Perez Bergliaffa <cit.> proposed the first exact magnetic black hole solution based on the extended scalar-tensor-Gauss-Bonnet theory (ESTGB) with a special type of nonlinear electrodynamics. The ESTGB black hole solution is characterized by the Arnowitt-Deser-Misner (ADM) mass and magnetic charge. 
When m>0 and q<0, the black hole solution is similar to the Reissner-Nordström black hole solution. The gray-body factor and absorption cross section of the massless Dirac field for this black hole were studied in Ref.<cit.>. Ma et al. <cit.> investigated the quasinormal modes and absorption cross section of the massless scalar field for this black hole. Besides, the thermodynamical properties of this black hole under the generalized uncertainty principle (GUP) have been studied in Ref.<cit.>. Because the spacetime around compact massive objects is curved, one of the remarkable predictions of general relativity is the deflection of light and the associated lens effect; this phenomenon is called gravitational lensing. One of the three well-known verification experiments of general relativity involves light deflection. Therefore, gravitational lensing is used as a special tool to test whether general relativity and its modifications are correct and to probe the properties of the matter surrounding a black hole. Moreover, one can extract characteristic information about the lensing object from gravitational lensing observations. Importantly, differences between distinct black hole lenses can be distinguished through the gravitational lensing effect <cit.>. Gravitational lensing therefore remains a very active research area in both the weak- and strong-field limits. The weak deflection angle of the Schwarzschild spacetime in vacuum can be expressed as α̂=2R_s/b, where R_s =2M and b is the impact parameter. Virbhadra et al. studied strong gravitational lensing in the context of the Schwarzschild black hole <cit.>. The variation of the tangential, radial, and total magnification of the images with respect to the angular source position was investigated by modeling the supermassive black hole M87* as a Schwarzschild lens <cit.>. Sereno <cit.> obtained the time delay and deflection angle expressions for Reissner-Nordström black holes under the weak-field approximation. In addition, many studies have addressed the weak deflection angle in different modified gravity theories using various methods <cit.>. Generally, the deflection angle or the relevant optical scalars can be expressed in terms of derivatives of the different components of the black hole metric. In the strong-field regime, the study of gravitational lensing is also a very active topic, and a number of articles have examined gravitational lensing in the strong field <cit.>. On the other hand, it is believed that compact astrophysical objects are immersed in a complicated environment, such as plasma. In this paper, we focus on the plasma environment only. Plasma is a dispersive medium whose refractive index depends on the frequency of the photons. The plasma around compact astrophysical objects affects the trajectories of light rays, since it interacts with electromagnetic waves. Synge <cit.> first proposed a self-consistent approach to the propagation of light rays in a gravitational field in the presence of a plasma medium. Forty years later, Perlick <cit.> proposed a different method to obtain the integral expression of the deflection angle when plasma surrounds the Schwarzschild and Kerr black holes. Later, Bisnovatyi-Kogan and Tsupko <cit.> found that in a uniform dispersive medium the deflection angle depends on the photon frequency, a behavior that is qualitatively different from the vacuum case.
The authors <cit.> also considered the case that the gravitational object is surrounded by the inhomogeneities of plasma and obtained the expression for the deflection angle of the different plasma models. Schee <cit.> et al. studied the gravitational lensing about the regular black hole immersed in plasma. The weak deflection angle of the wormhole solution described by exponential metric was obtained in Ref.<cit.>. The influences of uniform plasma on the the shadow and weak deflection angle for a rotating and regular black hole in a non-minimally coupled Einstein-Yang-Mills (EYM) theory have been studied <cit.>. Zhang et at. <cit.> studied the influences of the plasma with the power-law distribution and logarithmic normal distribution on the shadow of the Kerr black hole. In addition, Atamurotov and his coworkers were devoted to studying the weak gravitational lensing effect in plasma for various kinds of spacetimes such as the Lorentzian wormhole spacetime <cit.>, Schwarzschild-MOG black hole <cit.>, 4D Einstein-Gauss-Bonnet gravity <cit.>, rotating Einstein-Born-Infeld black hole <cit.>. In this study, we focus on the exact expression of the deflection angle for the (3+1)-dimensional ESTGB black hole assuming that the black hole is immersed in a plasma medium. And as an application, we will study the magnification of image in the weak field. The structure of this paper is as follows. Section <ref> presents a brief review of the process of obtaining the deflection angle under the weak-field approximation and calculating the deflection angle for the 4-dimensional ESTGB black hole, which is surrounded by three different plasma density distributions. In Section <ref>, as a type of application, we study the magnification of image for three different plasma density distributions, i.e., uniform plasma, SIS and NSIS medium. Finally, we give our concluding remarks in Section <ref>. Throughout, our choice of a spacetime signature is {-,+,+,+} and natural units c = G = ħ = 1. Latin indices run from 1 to 3 as well as Greek denotes from 0 to 3. § WEAK-FIELD LENSING IN THE PRESENCE OF PLASMA In this section, we will study optical properties, namely, gravitational lensing which is in the context of a 4D ESTGB black hole encompassed by the plasma medium under the weak-field approximation. The 4D ESTGB gravity with an extra matter field, namely a model of non-linear electrodynamics (NLED), has the following action <cit.> S = ∫ d^4x √(-g){1/4π(1/4(R - 1/2∂_μϕ∂^μϕ + f(ϕ) R__GB^2 -2 U (ϕ))-ℒ_ matter) }. Here the first term is the Einstein-Hilbert Lagrangian density, which is defined by the Ricci scalar R, the kinetic term of the scalar field 1/2∂_μϕ∂^μϕ, the non-minimal coupling between the Gauss-Bonnet invariant R__GB^2 and scalar field f(ϕ), i.e., f(ϕ) R__GB^2, and the scalar field potential U (ϕ). The Lagrangian density ℒ_ matter denotes any matter field in the action. Concretely, the Gauss-Bonnet invariant satisfies the form R__GB^2=R_αβμν^αβμν - 4 R_αβR^αβ+R ^2. The function f(ϕ) and the scalar field potential U (ϕ) can be expressed as f=-ℓ^2σ/32{√(2σ)tan^^-1( √(2)/√(σ).06cm ϕ) +1/2ϕln[( 2β/σϕ^2+β)^2] - 2/ϕ}, 𝒰(ϕ)=2^^9/2/105ℓ^2σ^7/2[π/2-tan^^-1(√(2)/√(σ).06cm ϕ)]ϕ^5/4ℓ^2(3/10σ+5ϕ^2/7+7σϕ^4/24) ln[( 2β/σϕ^2+β)^2] -ϕ/3ℓ^2(16/35σ^3-8ϕ^2/105σ^2+31ϕ^4/70σ+11ϕ^6/28). The NLED Lagrangian term that reduces to Maxwell's electrodynamics in the weak field regime has the following form ℒ_𝒩ℒℰ𝒟=ℱ/8-s^^1/2( 1 +37/210σ_∗+2/525σ_∗)ℱ^^5/4 - σ_∗ s ℱ^^3/2/16 +𝒪(ℱ^^7/4), with the electromagnetic invariant ℱ=q^2/r^4. 
And the above parameters have the relations σ=σ_*, l=s=q, β=β_* and ϕ(r)=q/r. The metric describing the 4D ESTGB black hole can be written as ds^2=-f(r)dt^2+f^-1(r)dr^2+r^2dθ^2+r^2 sin^2θ dϕ^2, with f(r)=1-R_s/r-q^3/r^3, where R_s=2M, M is ADM mass and q is magnetic charge. Since the weak energy condition (WEC) should be satisfied by both the corresponding effective energy-momentum tensor and that of nonlinear electrodynamics, the value of q<0 is permitted. Without losing generality, we consider the case that is a non-extreme black hole. This means that the value of the magnetic charge is limited to this range -2^5/3/3 < q < 0 when M is set to 1. We know that photons will follow the null geodesics of the effective spacetime metric in the presence of NLED instead of the original spacetime metric. However, we need to state that the metric describing the 4D ESTGB-NLED spacetime is obtained in the weak field where the NLED reduces to Maxwell's theory (see Ref. <cit.> for more detail). Therefore, photons still follow the null geodesics of the original spacetime metric in the weak field. Now, a general approach <cit.> is introduced to derive the deflection angle in the uniform or non-uniform plasma. We have the metric coefficients under the weak field approximation, which are given by g_αβ=η_αβ + h_αβ, where η_αβ is the Minkowski metric, i.e., (-1,1,1,1), h_αβ is perturbation metric. Note that h_αβ≪ 1, h_αβ→ 0 where x^α→∞, g^αβ=η^αβ-h^αβ, h^αβ=h_αβ. The refractive index of the static inhomogeneous plasma that relies on the photon frequency ω(x^i) and space location x^α has the following form n^2=1-ω^2_e/ω^2(x^i),   ω^2_e=4π e^2 N(r)/m=K_e N(r), where ω_e is the electron plasma frequency, N(r) is the electron density in the inhomogeneous plasma, e and m denote the charge and mass of the electron, respectively. It is worth noting that when ω_e< ω the electromagnetic waves can propagate in the such plasma. That is to say, the plasma medium has a reflective medium effect when ω_e< ω where ω(∞)≡ω. Considering the effect of the plasma on the deflection angle in the weak field limit, we get the expression of deflection angle in the following form α̂_k=1/2∫_-∞^∞(h_33,k+ h_00,k/1-ω^2_e/ω^2-K_eN_,k/ω^2-ω_e^2)dz, for k=1,2. The deflection angle with the impact parameter b found in Ref.<cit.> for more detail, can be written as α̂_k=1/2∫_-∞^∞b/r×(dh_33/dr+1/1-ω^2_e/ω^2dh_00/dr-K_e/ω^2-ω_e^2dN/dr)dz. The location of the photon is presented by b and z under the axially symmetric case, and then the magnitude of the radius-vector is written as r=√(b^2+z^2) <cit.>. It is worth noting that the negative value of α̂_b indicates the bending of the photon trajectory towards the compact object, and the positive value indicates the opposite. In the weak gravitational field regime, we can rewrite the metric around the 4D ESTGB black hole as ds^2=ds_0^2+(R_s/r+q^3/r^3)(dt^2+dr^2), where ds^2_0 is the flat part of metric, and it has the following form ds^2_0=-dt^2+dr^2+r^2(dθ^2+sin^2θ dϕ^2). The components h_αβ can be expressed in the Cartesian frame as h_00=R_s/r+q^3/r^3, h_ik=h_00n_in_k, h_33=h_00cos^2χ, where cosχ=z/√(b^2+z^2) and r=√(b^2+z^2). By substituting Eq.(<ref>) into Eq.(<ref>), we have the concrete form of the deflection angle in the following expression <cit.> α̂_b=∫_-∞^∞b/2r(∂_r((R_s/r+q^3/r^3)cos^2χ) +∂_r(R_s/r+q^3/r^3)1/1-ω^2_e/ω^2-K_e/ω^2-ω^2_e∂_rN)dz. 
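This line-of-sight integral can also be evaluated directly by numerical quadrature before specializing to particular plasma models. The following sketch (assuming SciPy; parameter values are illustrative) treats a constant electron density, so that the ∂_r N term vanishes; for q=0 and ω_e=0 it reproduces the Schwarzschild weak-field value 2R_s/b.

import numpy as np
from scipy.integrate import quad

def deflection_numeric(b, R_s=2.0, q=0.0, x_pl=0.0):
    # x_pl = omega_e^2 / omega^2 for a homogeneous plasma (dN/dr = 0)
    def integrand(z):
        r = np.hypot(b, z)
        d_h33 = -3 * R_s * z**2 / r**4 - 5 * q**3 * z**2 / r**6   # d/dr of h_00*cos^2(chi)
        d_h00 = -R_s / r**2 - 3 * q**3 / r**4                      # d/dr of h_00
        return b / (2 * r) * (d_h33 + d_h00 / (1 - x_pl))
    val, _ = quad(integrand, -np.inf, np.inf)
    return abs(val)

b, R_s = 10.0, 2.0
print(deflection_numeric(b, R_s), 2 * R_s / b)          # vacuum Schwarzschild check
print(deflection_numeric(3.0, R_s, q=-0.5, x_pl=0.5))   # ESTGB lens in a uniform plasma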
In what follows, we will calculate the integrals about the deflection angle considering the three specific plasma distributions, viz., uniform plasma, singular isothermal sphere (SIS), and non-singular isothermal sphere (NSIS) medium. §.§ Uniform plasma In the subsection, we will calculate the deflection angle using Eq.(<ref>) for the photon propagating in the 4D ESTGB spacetime surrounded by uniform plasma, which can be expressed as α̂_uni=α̂_uni1+α̂_uni2+α̂_uni3. The first term is the influence of the gravitational field of the ESTGB black hole α̂_uni1=∫_-∞^∞b/2r∂_r(R_s/r^3+q^3/r^5)z^2dz = -R_s/b-2q^3/3b^3. Note that when q=0 the spacetime will recover to the Schwarzschild spacetime, and we will obtain α̂_uni1=R_s/b. The second term includes the influence of the gravitational field and plasma medium, which can be written as α̂_uni2=∫_-∞^∞b/2r∂_r(R_s/r+q^3/r^3)1/1-ω^2_e/ω^2dz =-(R_s/b+q^3/b^3)1/1-ω^2_e/ω^2. Because the last term is the influence of the inhomogeneity of plasma, we get ∂_rN=0 for uniform plasma. In the relevant literature about weak gravitational lensing, the deflection angle is usually defined as a positive one <cit.>. Thus, we have the following expression about the uniform plasma α̂_uni=R_s/b+2q^3/3b^3+(R_s/b+2q^3/b^3)1/1-ω^2_0/ω^2, where ω_0=ω_e(∞). In Fig.<ref>, we plot the deflection angle α̂_b with respect to the impact parameter b for different values of magnetic charge q at ω_0^2/ω^2=0.5, and plasma medium parameter at q=-0.5. The deflection angle diminishes with an increase in the impact parameter b. As can be seen from Fig.<ref>, when b≫ R_s, we can neglect the effect of the magnetic charge on the deflection angle. In addition, it is easy to see from Eq.(<ref>), the deflection angle is very small or even disappear when the impact parameter b is large. Fig.<ref> demonstrates the dependence of the deflection angle from the uniform plasma parameter and magnetic charge at b=3. We can see in the left figure that the deflection angle increases rapidly when ω_0^2/ω^2 increases to 1. As the absolute value of magnetic charge decreases, the deflection angle slightly increases. §.§ Singular isothermal sphere In the subsection, we consider the case of an SIS around the 4D ESTGB black hole. The SIS is primarily introduced in Refs.<cit.> and <cit.> to study the lens systems of the galaxies and clusters of galaxies. The density distribution of the SIS is written as ρ(r)=σ_v^2/2 π r^2, where v is the one-dimensional velocity dispersion. We can obtain the plasma concentration by making use of Eq.(<ref>) and the following relation N(r)=ρ(r)/κ m_p, in which κ is a coefficient which is related to the contribution of dark matter, called by 1D coefficient, and m is the mass of proton. The plasma frequency has the expression ω^2_e=K_eN(r)=K_eσ^2_v/2πκ m_pr^-2. Using Eq.(<ref>), we can calculate the deflection angle for an SIS. Due to the fact that the first term is the effect of the gravitational field, it has the same expression as Eq.(<ref>) α̂_sis1=α̂_uni1. For the other terms, we calculate the integrals and obtain the following results α̂_sis2 =∫_-∞^∞b/2r∂_r(R_s/r+q^3/r^3)(1+ω^2_e/ω^2)dz =-((R_s/b+2q^3/b^3)+(2R_s/3π b+8q^3/5π b^3)ω^2_cR_s^2/ω^2 b^2), α̂_sis3=-K_eb/2ω^2∫_-∞^∞1/rdN(r)/drdz=ω_c^2R_s^2/2ω^2b^2, where ω_c^2 is defined as <cit.> ω_c^2=K_eσ^2/2κ m_p R^2_s. We obtain the deflection angle about the SIS, which can be written as α̂_sis=((2R_s/b+8q^3/3b^3)+(-1/2+2R_s/3π b+8q^3/5π b^3)ω_c^2R_s^2/ω^2b^2). 
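The closed-form expressions obtained so far for the uniform-plasma and SIS cases can be compared directly. The following short sketch (plain NumPy, with illustrative parameter values close to those used in the figures, R_s=2 and b=3) evaluates both.

import numpy as np

def alpha_uniform(b, R_s, q, x0):   # x0 = omega_0^2 / omega^2
    return R_s / b + 2 * q**3 / (3 * b**3) + (R_s / b + 2 * q**3 / b**3) / (1 - x0)

def alpha_sis(b, R_s, q, xc):       # xc = omega_c^2 / omega^2
    return (2 * R_s / b + 8 * q**3 / (3 * b**3)
            + (-0.5 + 2 * R_s / (3 * np.pi * b) + 8 * q**3 / (5 * np.pi * b**3))
            * xc * R_s**2 / b**2)

R_s, b = 2.0, 3.0
for q in (-0.5, -1.0):
    print(q, alpha_uniform(b, R_s, q, 0.5), alpha_sis(b, R_s, q, 0.5))
# the uniform-plasma value exceeds the SIS value, and both decrease as |q| grows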
To simulate the effect of SIS on the trajectory of light, we demonstrate the deflection angle α̂ versus the impact parameter b for different values of magnetic charge when ω_c^2/ω^2 is set to 0.5, and the SIS parameter for fixed q=-0.5 in Fig.<ref>. It's not hard to get that when we increase the impact parameter the deflection angle decreases. Fig.<ref> is the visualization of deflection angle to SIS parameter and magnetic charge, respectively. It is straightforward to show that the deflection angle diminishes when ω_c^2/ω^2 increases (left figure), however, when the absolute value of magnetic charge decreases the deflection angle increases (right figure). This means that the existence of a SIS around the black hole reduces the deflection angle in comparison to the vacuum or uniform cases. §.§ Non-singular isothermal sphere In the subsection, we aim to give the exact expression of the deflection angle of the ESTGB black hole in the presence of the NSIS. The plasma distribution can be expressed as <cit.> ρ(r)=σ^2_v/2π(r^2+r_c^2), where r_c is the core radius, and the concentration becomes N(r)=σ^2/2πκ m_p(r^2+r_c^2). The corresponding plasma frequency has the following form ω_e^2=K_eσ^2_v/2πκ m_p(r^2+r_c^2). Similarly to the last subsection, the first term remains unchanged, and other terms of Eq.(<ref>) will have the expressions α̂_nsis2 =∫_-∞^∞b/2r∂_r(R_s/r+q^3/r^3)(1+ω^2_e/ω^2)dz =-(R_s/b+q^3/b^3)-(R_s/bπ r_c^2+b R_sarctan(r_c/√(b^2+r_c^2))/π r^3_c√(b^2+r_c^2)) ×ω_c^2R^2_s/ω^2-(-1/b^2 r_c^4 +2/3 b^4 r_c^2 +arctan(r_c/√(b^2+r_c^2))/r_c^5√(b^2+r_c^2))×3 q^3 b R_s^2ω_c^2/ω^2π, α̂_nsis3=-K_eb/2ω^2∫_-∞^∞1/rdN(r)/drdz=b/2(b^2+r_c^2)^3/2ω_c^2R^2_s/ω^2, where ω_c^2=K_eσ^2_v/2κ m_p R^2_s. One can obtain the following form of the deflection angle by summing all the integrals α̂_nsis=(2R_s/b+8q^3/3b^3)+(R_s/bπ r_c^2-b/2(b^2+r_c^2)^3/2+b R_sarctan(r_c/√(b^2+r_c^2))/π r^3_c√(b^2+r_c^2))ω_c^2R^2_s/ω^2       +(-1/b^2 r_c^4 +2/3 b^4 r_c^2arctan(r_c/√(b^2+r_c^2))/r_c^5√(b^2+r_c^2))3 q^3 b R_s^2ω_c^2/ω^2π. The variation of the deflection angle α̂_b with the impact parameter b is shown in Fig.<ref>, where the ESTGB-NLED black hole is surrounded by NSIS medium. From Fig.<ref>, we can conclude that the increase of the impact parameter leads to the diminishing of deflection angle. And we can see from the right panel that the difference in the deflection angle becomes more and more obvious with an increase in the impact parameter for the different values of the NSIS medium. In Fig.<ref>, we plot the dependence of the deflection angle on the NSIS parameter for the different magnetic charges (left panel) and on the magnetic charge for the different NSIS parameters (right panel). In these two cases we fix b=3 and r_c=3. The effect of NSIS on the deflection angle is similar to that of the SIS case by comparing Figs.<ref> and <ref>. In the above three subsections, we studied the effect of the different distributions of the plasma and magnetic charge on the deflection angle in detail. To directly compare the effects of different plasmas, i.e., uniform plasma, SIS, and NSIS media, we study the dependence of the deflection angle on different parameters. The comparison results are shown in Fig.<ref> where we fix the corresponding parameters, viz., ω_0^2/ω^2=ω_c^2/ω^2=0.5, impact parameter b=3 and the core radius r_c=3. The uniform plasma medium exhibits better refraction properties than the SIS and NSIS models, as shown in Fig.<ref>. 
It is easy to see that the magnetic charge has a small effect on the deflection angle of the black holes in different plasma distributions. We also notice from the right figure that when we increase the plasma parameter ω_0^2/ω^2 or ω_c^2/ω^2, the deflection angle in the presence of the SIS or NSIS medium diminishes, whereas the deflection angle of the uniform plasma has the opposite trend. Finally, the deflection angle decreases with the increase of the impact parameter for the three models. In a word, the bending degree of deflection can be expressed mathematically as, α̂_uni > α̂_sis> α̂_nsis. § MAGNIFICATION OF IMAGE In this section, we will analyze in detail the magnification of image for the ESTGB black hole in the presence of the different plasma using the formula of the deflection angle studied in our previous section. The lens equation has the form <cit.> θ D_s=β D_s+α̂_b D_d s, where D_s is the distance from the observer to the distant light source, and D_d s is the distance from the lens object to the distant light source (see Fig.<ref>). θ denotes the angle of the apparent source image for the observer lens axis, β denotes the angle of the light source with respect to the observer lens axis, and α̂_b is the angle between the apparent source image and light source, i.e., deflection angle. We make use of the relationship between the impact parameter and angle θ, and θ possesses the expression b=D_dθ where D_d is the distance from the lens object to the observer, to rewrite the expression (<ref>), into the form <cit.> β =θ-D_ds/D_sF(θ)/D_d1/θ, and F(θ)=|α̂_b|b=|α̂_b(θ)|D_dθ. Note that when the light source, lens object, and observer remain in a straight line, the angle β is equal to zero. In such a case, the relativistic image will form a relativistic ring known as an Einstein ring. The radius of the Einstein ring R_0=D_dθ_0, where θ_0 denotes the Einstein angle. The Einstein angle in the context of the Schwarzschild black hole can be expressed as <cit.> θ_0=√(2R_sD_ds/D_dD_s). The Einstein angle θ_0 is small but can be solved with modern telescopes. However, we can detect the gravitational lensing owing to the changes in the apparent brightness of the source, namely magnification of the image brightness. The basic equation of the magnification of the image brightness is expressed as <cit.> μ_Σ=I_tot/I_*=∑_k|(θ_k/β)(dθ_k/dβ)|,  k=1,2,...,s, where I_tot and I_* refer to the total brightness of the image and unlensed brightness of the pure source, respectively. k is the number of the images and s is the total number of the images. Next, we will study the effect of the different distribution plasma around the ESTGB black hole on the magnification of the images. §.§ Uniform plasma We first calculate the expression of the Einstein angle θ^pl_0 in the context of the uniform plasma. We have the form by using Eqs.(<ref>) and (<ref>) as follows (θ^pl_0)_uni=θ_0{1/2((1+2q^3/3R_sb^2)+(1+2q^3/R_s b^2)1/1-ω_0^2/ω^2)}^1/2. We obtain the magnification of image by bring the above Eq.(<ref>) into Eq.(<ref>), which is given by <cit.> μ_tot^pl=μ_+^pl+μ_-^pl=x^2+2/x√(x^2+4). Here μ_+ is the magnification factor of the primary image, which is located on the same side of the light source with respect to the lens object <cit.> μ_+=1/4[x/√(x^2+4)+√(x^2+4)/x+2], and μ_- is the magnification factor of the secondary image, which is situated on the opposite side μ_-=1/4[x/√(x^2+4)+√(x^2+4)/x-2], where x denotes the dimensionless parameter in the presence of the uniform plasma. 
It has the following form x_uni=β/(θ^pl_0)_uni=x_0{1/2((1+2q^3/3R_sb^2)+(1+2q^3/R_s b^2)1/1-ω_0^2/ω^2)}^-1/2, with x_0=β/θ_0. For a better understanding of the effect of the magnetic charge and plasma on the magnification of image, in Fig.<ref>, we plot the variation of the total magnification of image with the magnetic charge for the different values of the uniform plasma parameter (left figure) and the uniform plasma parameter for the different values of magnetic charge (right figure) for fixed R_s=2, b=3 and x_0=0.055. We can see that the total magnification exhibits a small increase as the absolute value of magnetic charge decreases and reaches a maximum when it returns to the Schwarzschild black hole. It is easy to see from the right panel that the total magnification increases exponentially with the increase of uniform plasma distribution. In other words, the existence of uniform plasma usually increases the magnification. Besides, we also plot the ratios μ_+^pl/μ_+ (lower curves) and μ_-^pl/μ_- (upper curves) of the magnification with the given parameters q=-0.5, b=3 and R_s=2 in Fig.<ref>, for more details about the effect of the plasma on the magnification. It is evident that when the value of the uniform plasma density distribution increases, the magnification ratio increases. The behavior of the magnification ratio of the image brightness corresponds to the fact that the deflection angle is increased by ω_0^2/ω^2. In addition, the magnification ratio of the secondary image μ_-^pl/μ_- becomes larger, while the magnification ratio of the primary image μ_+^pl/μ_+ tends to unity when x increases. §.§ Singular isothermal sphere We have calculated the deflection angle for the case that the 4D ESTGB black hole surrounded by the uniform plasma in the last subsection. So in the subsection, we consider the influence of the SIS on the total magnification and the magnification ratio of image brightness. The expression of the Einstein angle θ^pl_0 in the context of the SIS medium can be expressed as (θ^pl_0)_sis=θ_0{1/2((2+8q^3/3R_sb^2)+(-1/2+2R_s/3π b+8q^3/ 5π b^3) R_sω_c^2/b ω^2)}^1/2. Since the calculational part is similar, we have x in the presence of the SIS plasma medium, which has the following form x_sis=β/(θ^pl_0)_sis=x_0{1/2((2+8q^3/3R_sb^2)+(-1/2+2R_s/3π b+8q^3/ 5π b^3) R_sω_c^2/b ω^2)}^-1/2, where x_0=β/θ_0. Fig.<ref> shows the changes in the total magnification of image as the function of the magnetic charge (left figure) for the different parameter values of the SIS parameter, and the SIS parameter (right figure) for the different values of the magnetic charge where corresponding fixed parameters are b=3, x_0=0.055 and R_s=2. From Fig.<ref>, we can see that when we increase the SIS medium, the total magnification decreases gradually. Because the plasma density decreases with the radius (dN/dr<0), α̂_sis3 is negative which is opposite to the gravitational deflection (see Refs.<cit.> and <cit.>). If α̂_sis3 is positive, the total magnification of image as the function of ω_c^2/ω^2 has the opposite direction (see Refs.<cit.>). Fig.<ref> demonstrates the magnification ratio, i.e., the primary image μ_+^pl/μ_+ (lower curves) and the secondary image μ_-^pl/μ_- (upper curves) in the case we fix the parameters as q=-0.5, b=3 and R_s=2. Because the effect of the SIS medium, the behavior of the magnification ratio is opposite to that of the uniform plasma. 
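The magnification formulas of this and the preceding subsection are easy to evaluate numerically. The sketch below (plain NumPy, with the illustrative values R_s=2, b=3 and x_0=0.055 used in the figures) computes the total magnification (x^2+2)/(x√(x^2+4)) at the rescaled image positions x_uni and x_sis defined above.

import numpy as np

def mu_total(x):
    return (x**2 + 2) / (x * np.sqrt(x**2 + 4))

def x_uniform(x0, b, R_s, q, w):    # w = omega_0^2 / omega^2
    scale = 0.5 * ((1 + 2 * q**3 / (3 * R_s * b**2))
                   + (1 + 2 * q**3 / (R_s * b**2)) / (1 - w))
    return x0 / np.sqrt(scale)

def x_sis(x0, b, R_s, q, w):        # w = omega_c^2 / omega^2
    scale = 0.5 * ((2 + 8 * q**3 / (3 * R_s * b**2))
                   + (-0.5 + 2 * R_s / (3 * np.pi * b) + 8 * q**3 / (5 * np.pi * b**3))
                   * R_s * w / b)
    return x0 / np.sqrt(scale)

x0, b, R_s, q = 0.055, 3.0, 2.0, -0.5
print(mu_total(x_uniform(x0, b, R_s, q, 0.5)))   # uniform plasma enhances the total magnification
print(mu_total(x_sis(x0, b, R_s, q, 0.5)))       # SIS reduces it relative to the no-plasma case
print(mu_total(x_sis(x0, b, R_s, q, 0.0)))       # no-plasma reference value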
§.§ Non-Singular isothermal sphere In this subsection, we focus on the total magnification and the magnification ratio of image brightness for the ESTGB black hole surrounded by the NSIS medium. The Einstein angle θ_0^pl can be written as (θ^pl_0)_nsis =θ_0{1/2((2+8q^3/3b^2R_s)+(R_s/bπ r_c^2-b/2(b^2+r_c^2)^3/2+b R_sarctan(r_c/√(b^2+r_c^2))/π r^3_c√(b^2+r_c^2)) ×ω_c^2R_s b/ω^2 +(-1/b^2 r_c^4 +2/3 b^4 r_c^2+arctan(r_c/√(b^2+r_c^2))/r_c^5√(b^2+r_c^2))3 q^3 b^2 R_sω_c^2/ω^2π)}^1/2. The dimensionless parameter x has the form x_nsis =β/(θ^pl_0)_nsis =x_0{1/2((2+8q^3/3b^2R_s)+(R_s/bπ r_c^2- b/2(b^2+r_c^2)^3/2+b R_sarctan(r_c/√(b^2+r_c^2))/π r^3_c√(b^2+r_c^2)) ×ω_c^2R_s b/ω^2+(-1/b^2 r_c^4 +2/3 b^4 r_c^2+arctan(r_c/√(b^2+r_c^2))/r_c^5√(b^2+r_c^2))3 q^3 b^2 R_sω_c^2/ω^2π)}^-1/2, where x_0=β/θ_0. In Fig.<ref>, we show the graph of the total magnification for the case that the black hole is surrounded by the NSIS medium. By analyzing the behavior shown in Fig.<ref>, one can see that the change is similar to the case of the singular isothermal sphere. The presence of a NSIS reduces the total amplification in comparison with vacuum circumstance, i.e., ω_c^2/ω^2=0. This is because α̂_nsis3 is negative. We also plot the changes of the magnification ratio of the primary and secondary images with fixed R_s=2, b=3, x_0=0.055 and r_c=3 in Fig.<ref>. It is observed that μ_-^pl/μ_- (upper curves) tends to unity as larger x. And the ratio μ_+^pl/μ_+ (lower curves) is less than 1. We compare the magnification ratio of image brightness of the Schwarzschild black hole and ESTGB black hole in the uniform plasma in Fig.<ref>. We see that at large x the ratio of the magnification μ_+^pl/μ_+ tends to unity for the Schwarzschild black hole and ESTGB black hole; the ratio of the magnification μ_-^pl/μ_- of the Schwarzschild black hole tends to a constant, 2.25. This is consistent with the results of Bisnovatyi-Kogan et al.<cit.>. In addition, the magnetic charge has slight influence on the magnification ratio of the image. To compare the effects of the different plasma models on magnification ratio of image brightness, in Fig.<ref> we plot the magnification ratio of the three plasma distributions, i.e., uniform, SIS and NSIS, with the same parameters q=-0.5, b=3, R_s=2, ω_0^2/ω^2=ω_c^2/ω^2=0.5 and r_c=3. We can obtain from Fig.<ref> that as a consequence of the non-uniform plasma distribution around the black hole, the magnification ratio of the non-uniform plasma is less than that of uniform plasma. This means that only when there is uniform plasma around the black hole, the observer in the distance will perceive a considerable magnification. § CONCLUSION AND DISCUSSION In the work, we discussed the weak gravitational lensing properties of a 4D ESTGB black hole immersed in different plasma distribution models. We studied in detail the effect of the different plasma distribution models, i.e., uniform, SIS and NSIS medium, and the magnetic charge on the deflection of light. We found that the deflection angle increases slightly with the decrease of the absolute values of the magnetic charge. That is, the black hole has the maximum deflection angle when it returns to the Schwarzschild black hole. We showed that the presence of uniform plasma leads to an increase in the deflection angle. However, due to the fact that α̂_sis3 (α̂_nsis3) caused by the plasma inhomogeneity is less than zero , the deflection angle of the non-uniform plasma medium slightly diminishes with the increase of the plasma parameter. 
Moreover, compared with the SIS model, we found that the deflection angle is more sensitive to parameters b and ω_c^2/ω^2 in the NSIS model. We investigated the total magnification of image due to the weak gravitational lensing effect around a plasma-surrounded black hole. We observed that the change of the total magnification is similar to that of the deflection angle. In other words, for the uniform plasma model, the magnification of image increases, while for SIS or NSIS model, the magnification of image decreases. This result is also indicated by the magnification ratio of the image source. Finally, according to the influence of three plasma models on the deflection angle and the magnification of image, we can qualitatively understand the uniform plasma as a concave lens, while the SIS and NSIS plasma models as a convex lens in the context of the refractive index n<1. § ACKNOWLEDGMENTS This work was supported partly by the National Natural Science Foundation of China (Grant No. 12065012), Yunnan High-level Talent Training Support Plan Young & Elite Talents Project (Grant No. YNWR-QNBJ-2018-360) and the Fund for Reserve Talents of Young and Middle-aged Academic and Technical Leaders of Yunnan Province (Grant No. 2018HB006). 99 Akiyama2019 K. Akiyama et al. [Event Horizon Telescope], Astrophys. J. Lett. 875, L1 (2019). LIGOScientific:2016aoc B. P. Abbott et al. [LIGO Scientific and Virgo], Phys. Rev. Lett. 116, 061102 (2016). Doneva:2018rou D. D. Doneva, S. Kiorpelidi, P. G. Nedkova, E. Papantonopoulos and S. S. Yazadjiev, Phys. Rev. D 98, 104056 (2018). Doneva:2017bvd D. D. Doneva and S. S. Yazadjiev, Phys. Rev. Lett. 120, 131103 (2018). Heydari-Fard:2016nlj M. Heydari-Fard, H. Razmi and M. Yousefi, Int. J. Mod. Phys. D 26, 1750008 (2016). Canate:2020kla P. Cañate and S. E. Perez Bergliaffa, Phys. Rev. D 102, 104038 (2020). Li:2022jda Q. Li, C. Ma, Y. Zhang, Z. W. Lin and P. F. Duan, Chin. J. Phys. 77, 1269-1277 (2022). Ma:2022gzr C. Ma, Y. Zhang, Q. Li and Z. W. Lin, Commun. Theor. Phys. 74, 065402 (2022). Lin:2022eix Z. W. Lin, Y. Zhang, Q. Li, C. Ma and P. F. Duan, Int. J. Theor. Phys. 61, 199 (2022). Eiroa:2005ag E. F. Eiroa, Phys. Rev. D 73, 043002 (2006). Wei:2011bm S. W. Wei and Y. X. Liu, Phys. Rev. D 85, 064044 (2012). Virbhadra:1999nm K. S. Virbhadra and G. F. R. Ellis, Phys. Rev. D 62, 084003 (2000). Virbhadra:2022iiy K. S. Virbhadra, Phys. Rev. D 106, 064038 (2022). Sereno:2003nd M. Sereno, Phys. Rev. D 69, 023002 (2004). Jusufi:2017vta K. Jusufi, A. Ovgün and A. Banerjee, Phys. Rev. D 96, 084036 (2017). Ovgun:2018oxk A. Övgün, Universe 5, 115 (2019). Li:2020wvn Z. Li, G. Zhang and A. Övgün, Phys. Rev. D 101, 124058 (2020). Fu:2021akc Q. M. Fu, L. Zhao and Y. X. Liu, Phys. Rev. D 104, 024033 (2021). Javed:2020pyz W. Javed, J. Abbas, Y. Kumaran and A. Övgün, Int. J. Geom. Meth. Mod. Phys. 18, 2150003 (2021). Javed:2021arr W. Javed, A. Hamza and A. Övgün, Universe 7, 385 (2021). Li:2021xhy Z. Li and J. Jia, Phys. Rev. D 104, 044061 (2021). Crisnejo:2019xtp G. Crisnejo, E. Gallo and J. R. Villanueva, Phys. Rev. D 100, 044006 (2019). Crisnejo:2019ril G. Crisnejo, E. Gallo and K. Jusufi, Phys. Rev. D 100, 104045 (2019). Jha:2021eww S. K. Jha, S. Aziz and A. Rahaman, Eur. Phys. J. C 82, 106 (2022). Virbhadra:2002ju K. S. Virbhadra and G. F. R. Ellis, Phys. Rev. D 65, 103004 (2002). Rahvar:2018nhx S. Rahvar and J. W. Moffat, Mon. Not. Roy. Astron. Soc. 482, 4514-4518 (2019). Bozza:2010xqn V. Bozza, Gen. Rel. Grav. 42, 2269-2300 (2010). Virbhadra:2008ws K. S. Virbhadra, Phys. Rev. 
D 79, 083004 (2009). Chen:2013vja S. Chen and J. Jing, Class. Quant. Grav. 30, 175012 (2013). Ji:2013xua L. Ji, S. Chen and J. Jing, JHEP 03 (2014), 089 (2014). Chen:2015cpa S. Chen and J. Jing, JCAP 10, 002 (2015). Chen:2016hil S. Chen, S. Wang, Y. Huang, J. Jing and S. Wang, Phys. Rev. D 95, 104017 (2017). Zhang:2017vap R. Zhang, J. Jing and S. Chen, Phys. Rev. D 95, 064054 (2017). Abbas:2019olp G. Abbas, A. Mahmood and M. Zubair, Chin. Phys. C 44, 095105 (2020). Abbas:2021whh G. Abbas, A. Mahmood and M. Zubair, Phys. Dark Univ. 31, 100750 (2021). Hensh:2021nsv S. Hensh, J. Schee, A. Abdujabbarov and Z. Stuchlík, Eur. Phys. J. Plus 137, 242 (2022). Synge:1960ueh J.L. Synge, Relativity: the general theory (1960). Perlick2000 V. Perlick, Ray optics, Fermat's principle, and applications to general relativity (Springer Science & Business Media, 2000). Bisnovatyi-Kogan:2008qbk G. S. Bisnovatyi-Kogan and O. Y. Tsupko, Grav. Cosmol. 15, 20-27 (2009). Bisnovatyi-Kogan:2010flt G. S. Bisnovatyi-Kogan and O. Y. Tsupko, Mon. Not. Roy. Astron. Soc. 404, 1790-1800 (2010). Schee:2017hof J. Schee, Z. Stuchlík, B. Ahmedov, A. Abdujabbarov and B. Toshmatov, Int. J. Mod. Phys. D 26, 1741011 (2017). Turimov:2022iff B. Turimov, Y. Turaev, B. Ahmedov and Z. Stuchlík, Phys. Dark Univ. 35, 100946 (2022). Kala:2022uog S. Kala, H. Nandan and P. Sharma, Eur. Phys. J. Plus 137, 457 (2022). Zhang:2022osx Z. Zhang, H. Yan, M. Guo and B. Chen, Phys. Rev. D 107, 024027 (2023). Atamurotov:2021byp F. Atamurotov, S. Shaymatov and B. Ahmedov, Galaxies 9, 54 (2021). Atamurotov:2021qds F. Atamurotov, A. Abdujabbarov and J. Rayimbaev, Eur. Phys. J. C 81, 118 (2021). Babar:2021exh G. Z. Babar, F. Atamurotov and A. Z. Babar, Phys. Dark Univ. 32, 100798 (2021). Babar:2021nst G. Z. Babar, F. Atamurotov, S. Ul Islam and S. G. Ghosh, Phys. Rev. D 103, 084057 (2021). Hensh:2019ipu S. Hensh, A. Abdujabbarov, J. Schee and Z. Stuchlík, Eur. Phys. J. C 79, 533 (2019). Atamurotov:2021hoq F. Atamurotov, A. Abdujabbarov and W. B. Han, Phys. Rev. D 104, 084015 (2021). S1958 S. Chandrasekhar and S. Chandrasekhar, An introduction to the study of stellar structure (Courier Corporation, 1957). J1987 J. Binney and S. Tremaine, Galactic dynamics (Princeton university press, 2011). Morozova V. S. Morozova, B. J. Ahmedov and A. A. Tursunov, Astrophys. Space Sci. 346, 513-520 (2013). Bisnovatyi-Kogan:2015dxa G. S. Bisnovatyi-Kogan and O. Y. Tsupko, Plasma Phys. Rep. 41, 562 (2015).
http://arxiv.org/abs/2307.05545v2
20230708232436
Robotic Ultrasound Imaging: State-of-the-Art and Future Perspectives
[ "Zhongliang Jiang", "Septimiu E. Salcudean", "Nassir Navab" ]
cs.RO
[ "cs.RO" ]
Zhongliang Jiang^1 (corresponding author; Technische Universität München, Fakultät für Informatik – I16, Boltzmannstr. 3, 85748 Garching bei München; [email protected]), Septimiu E. Salcudean^2, and Nassir Navab^1,3. [1] Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany. [2] Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC V6T 1Z4, Canada. [3] Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA. Ultrasound (US) is one of the most widely used modalities for clinical intervention and diagnosis due to the merits of providing non-invasive, radiation-free, and real-time images. However, free-hand US examinations are highly operator-dependent. Robotic US Systems (RUSS) aim to overcome this shortcoming by offering reproducibility, while also aiming to improve dexterity and to enable intelligent, anatomy- and disease-aware imaging. In addition to enhancing diagnostic outcomes, RUSS also holds the potential to provide medical interventions for populations suffering from a shortage of experienced sonographers. In this paper, we categorize RUSS as teleoperated or autonomous. Regarding teleoperated RUSS, we summarize their technical developments and clinical evaluations. This survey then focuses on the review of recent work on autonomous robotic US imaging. We demonstrate that machine learning and artificial intelligence present the key techniques that enable intelligent, patient- and process-specific, motion- and deformation-aware robotic image acquisition. We also show that research on artificial intelligence for autonomous RUSS has directed the research community toward understanding and modeling expert sonographers' semantic reasoning and action. Here, we call this process the recovery of the "language of sonography". This side result of research on autonomous robotic US acquisitions could be considered as valuable and essential as the progress made in the robotic US examination itself. This article will provide both engineers and clinicians with a comprehensive understanding of RUSS by surveying the underlying techniques. Additionally, we present the challenges that the scientific community needs to face in the coming years in order to achieve its ultimate goal of developing intelligent robotic sonographer colleagues. These colleagues are expected to be capable of collaborating with human sonographers in dynamic environments to enhance both diagnostic and intraoperative imaging. Keywords: Ultrasound imaging, robotic ultrasound, telesonography, medical robotics, orientation optimization, path planning, visual servoing, compliant control, robotic US, robot learning, reinforcement learning, learning from demonstrations. § INTRODUCTION Today, medical imaging is one of the most crucial components of the entire healthcare industry, from wellness and screening to early diagnosis, treatment selection, and follow-up <cit.>. Compared to the other three most common medical imaging modalities used in current clinical practice [i.e., radiography (X-ray), computerized tomography (CT), and magnetic resonance imaging (MRI)], ultrasound (US) imaging has the advantage of being noninvasive, low-cost, portable, and free of ionizing radiation <cit.>. These merits make it particularly suitable for some clinical needs, such as image-guided interventions <cit.> and obstetric applications <cit.>. 
In October 2021, 0.79 million US examinations were performed in England, whereas there were 0.52 million CT scans and 0.31 million MRI scans <cit.>. However, regarding traditional free-hand US examinations, substantial experience and visuo-tactile skills are required for achieving high-quality US images <cit.>. These factors limit the utilization of US in clinical applications requiring reliable biometric measurements or repeatable images for monitoring lesions. To obtain high-quality images, sonographers need to maintain the probe with proper pressure and adjust the probe orientation for optimal acoustic windows. To overcome intra- and inter-operator variations, the robotic US system (RUSS) has been gaining attention for two decades. To illustrate the increased interest about RUSS, the number of related peer-reviewed publications in each year and cumulative years are depicted in Fig. <ref>. For individual years, the number of publications has grown from 1,020 in the year 2001 to 15,500 in the year 2022. The accumulated number of publications exponentially increased to 125,110 from 2001 to 2022. This dramatic rise in interest can be attributed to three distinct communities: engineers, clinicians, and entrepreneurs <cit.>. The need from clinicians for high-quality images and efficient and easy-to-use RUSS stimulates the development of RUSS by engineers. Due to the considerable economic benefits, entrepreneurs are motivated to develop prototypes and market them [https://www.adechotech.com/],[https://en.mgi-tech.com/],[https://www.bkmedical.com/]. To assist in combating global pandemics (e.g., COVID-19 and Ebola), the demand for intelligent systems and robotics is boosted extensively in the fields of disease prevention, screening, diagnosis, treatment, home care, etc. <cit.>. RUSS has been investigated to remotely or autonomously perform US tests for early detection and diagnosis <cit.>. Deploying RUSS in hospitals enables the separation of patients and sonographers, hence lowering the risks of virus transmission between patients and medical staff. This paper is motivated by the desire to assist both robotic US technicians and clinicians. For roboticists, we provide a comprehensive summary of enabling technologies (i.e., compliant force control and path planning) that are commonly needed for a variety of applications. In addition to the enabling technologies, the advanced solutions developed by integrating additional techniques (e.g., surface registration, visual servoing, and image segmentation) are summarized to demonstrate the potential of RUSS for addressing real-world challenges (e.g., tissue motion and deformation). Using these techniques, clinicians and technicians can further consider how RUSS can assist them in addressing particular clinical needs by sensibly integrating the different techniques together. This will help to bridge the gap between medical and technology research. Prior to this survey, there were some reviews that summarized the development of RUSS <cit.>. Recently, Salcudean et al. discussed the roles robotics play in the acquisition of medical images, including US, endoscopy, X-ray, optical coherence tomography, and nuclear medicine <cit.>. Specific to RUSS, Von Haxthausen et al. provided a systematic summary of recent publications between 2016 and 2020 <cit.>. Li et al. focused on the development of autonomous RUSS <cit.>. 
These two surveys categorize literature based on the level of automation; in contrast, this article emphasizes the connection between the potential clinical applications and enabling techniques. In addition, some novel concepts of application-oriented techniques (e.g., motion-aware <cit.> and deformation-aware <cit.> characteristics) have not been discussed before. However, they are important to further pave the way for applying RUSS in real scenarios. Due to the fast development of artificial intelligence (AI), learning-based RUSS is emerging to automatically perform specific US examinations <cit.>. Li et al. also noted this trend and mentioned the AI-based RUSS as one of the future directions <cit.>. Nevertheless, learning-based RUSS solutions have not been systematically discussed yet. Therefore, a comprehensive survey article covering these new trends of RUSS will be helpful for roboticists to quickly and systematically learn the key knowledge of RUSS, as well as for clinicians to comprehend how the robot benefits their specific clinical needs. Regarding future development for RUSS, we discussed some open challenges and promising perspectives to inspire the research community and other stakeholders. § MATERIALS AND METHODS §.§ Searching Policy In order to provide an objective view of the development of robotic US imaging over the last two decades, we carried out an extensive search of RUSS on the Web of Science and google scholar. The search term was “(remote OR teleoperat*) AND (ultrasound OR US OR ultrasonography OR echography)", and “robot* AND (ultrasound OR US OR ultrasonography OR echography) AND (Imaging OR screening OR scan* OR acquisition* OR servoing)". To further narrow the most relevant and most impactful articles, the titles and abstracts were carefully reviewed to exclude the articles that were (a) not focusing on the medical domain, (b) not using robotic imaging adjustment or optimization, or (c) not employing traditional 2D/3D probes. This excludes papers using endocavitary probes <cit.> for cardiology and prostate applications. Finally, among similar articles, the most representative ones (the newest or most cited) were selected. §.§ Technological Developments in RUSS   Skilled sonographers are often in shortage, particularly in rural areas. To allow accurate adjustment of US acquisition parameters and address the unbalanced distribution of healthcare resources across nations and regions, teleoperated RUSS solutions have been developed over the past two decades (see Section <ref>). For such systems, the operations are fully carried out by experts via teleoperation techniques; thereby, remote experts take the responsibility of robotic acquisition. To improve the level of autonomy of RUSS, quite a large number of RUSS solutions have been proposed for different applications in the past decades. To review the key characteristics of autonomous RUSS, we first summarize the existing articles in terms of enabling technologies, namely three key acquisitions parameters: contact force (Section <ref>), probe orientation (Section <ref>), and scan path (Section <ref>). By precisely controlling these parameters, the accuracy and reproducibility of US imaging can be improved <cit.>. In addition, more advanced techniques need to be developed to tackle additional practical complications occurring in clinical routines, e.g., patient movement and probe pressure-induced deformation. 
In this article, we featured four advanced techniques: 1) motion-aware US imaging (Section <ref>), deformation-aware US imaging (Section <ref>), US visual servoing (Section <ref>), and elastography imaging (Section <ref>). Sonographers often need to search for standard examination planes for biometric measurement and diagnosis. It is a time-consuming and non-repeatable process, even for experienced sonographers, due to the noisy US images and tissue motion. Benefiting from the development of artificial intelligence, and in particular deep learning, the area of medical image processing has achieved phenomenal success <cit.>. Learning-based image processing techniques lead to accurate and robust understandings of US images, which further enables training RUSS to learn both manipulation skills and clinical knowledge directly from human sonographers. We summarize the most recent developments in learning-powered RUSS (Section <ref>), aiming to automatically search for specific anatomy or navigate a probe to visualize standard US planes. Finally, we discuss the open challenges and provide a few potential directions for future developments Section <ref>. The important components of robotic US and the organization structure of this article are depicted in Fig. <ref>. By incorporating additional techniques to fundamental enabling technologies, the level of technical complexity is increased from Section <ref> to Section <ref>. In this way, we would like to highlight our strategy to inspire the community to achieve the ultimate goal of developing an intelligent robotic sonographer that can collaborate with human sonographers to improve diagnostic and intraoperative imaging in real scenarios. § TELEOPERATION IN RUSS   Teleoperation allows operators to remotely carry out certain tasks. Due to the development of networks, multimedia, and communication technologies in the past decades, teleoperation has become one of the most mature techniques for reforming modern medical procedures <cit.>. The main characteristic of teleoperation is that the robot's motion is controlled by operators. This is important for obtaining regulatory approval. The most successful representative is da Vinci from Intuitive Surgical, which has become the clinical standard of minimally invasive surgery for a wide range of surgical procedures <cit.>. Regarding teleoperated RUSS, it has been seen as a solution for work-related musculoskeletal disorders of sonographers <cit.>. In addition, separating operators from patients reduces the risk of transmitting pandemics (e.g., Covid-19) <cit.>. This section summarizes the technical and clinical contributions of remote RUSS, respectively. §.§ Technical Developments   Teleoperated RUSS often consists of three individual components: 1) an expert console, 2) a patient-side manipulator (PSM) used to maneuver a US probe, and 3) a software control system mapping the movement made by experts to the PSM. The teleoperated RUSS allows sonographers to manually, unconstrainedly, and safely control the probe motion onto the patient via the PSM. Teleoperated systems are also utilized on-site because robotic systems can overcome human limits in manipulation and perception by adding dexterity and precision. A common example is da Vinci, which is often employed on-site <cit.>. §.§.§ Robotic Mechanism In 1999, Salcudean et al. designed a six degree of freedom (DOF) lightweight mechanism with limited force capability for teleoperated RUSS <cit.>. 
Due to the need for a large orientation workspace, a parallelogram linkage was employed to decouple the orientation and translation in their final design, achieving the control resolution of 0.1 mm for translation and 0.09^∘ for rotation. Similarly, Lessard et al. designed the PSM in parallel structure in order to have enough workspace <cit.>. Masuda et al. designed a 6-DOF mechanism consisting of gimbals, pantograph and slide mechanisms, which weighed 3.3 kg <cit.>. To guarantee the safety of patients, there are four sensors symmetrically deployed around the probe to monitor real-time force. In addition, a number of soft mechanisms were developed for force-sensitive applications, e.g., obstetric examinations, to strictly limit the maximum US probe pressure. Vilchis et al. proposed a cable-driven nonrigid remote robot <cit.>. This system has been used on 100 patients with abdominal aortic aneurysm (AAA) at a distance of 1125 km. Tsumura et al. designed a passive mechanism using springs for fetal examinations, which can prevent excessive contact force <cit.>. Besides, a portable and attachable robotic system has been designed by Ito et al. <cit.> [see Fig. <ref> (e)]. In the same direction, Vieyres et al. proposed a 4-DOF light mechanism with 3-DOF rotation and 1-DOF translation in probe centerline <cit.>. Then, they updated the design of the portable RUSS to allow all 6-DOF motions using serial mechanism <cit.>. The portable RUSS is easily used by paramedics, which makes it ideal for use in emergency medical circumstances. Nevertheless, owing to the need of the compact structure, portable RUSS typically have restricted working space. Since mechanical design is beyond the scope of this survey's primary focus on imaging acquisition, we refer readers to two comprehensive review articles with mechanical designs for RUSS <cit.>. To reduce the cost of RUSS, commercial robotic manipulators e.g., Universal Robot (University robot, Denmark) and Franka Emika Panda (Franka Emika GmbH, Germany) are often used as PSM <cit.> [see Fig. <ref> (b) and (c)]. It is noteworthy that another typical standard robotic arm KUKA LBR iiwa (KUKA Robotics GmbH, Germany), with integrated joint torque sensors, is also commonly employed as a PSM <cit.>. HIPPOCRATE is a representative of teleoperated RUSS developed using a serial industrial robotic arm <cit.>. §.§.§ Shared Autonomy in Teleoperated RUSS To fully take advantage of the stability and accuracy of robotic techniques, Abolmaesumi et al. proposed a shared autonomy strategy between an expert and an image servo <cit.>. The in-plane three DOFs were controlled by visual servoing to automatically center the carotid artery in cross-sectional images, while the other three DOFs were teleoperated by an expert. In this case, the image servo can provide pixel-by-pixel control accuracy and further mitigate the negative influence of human tremor. To keep the tissue of interest always visible in the image and give more flexibility to the expert, Li et al. and Krupa et al. shared all four (in-plane and out-of-plane) DOFs of a lightweight body-mounted mechanism between the visual servoing algorithm and a human operator via teleoperation <cit.>. The visual servoing technique has also been widely used in autonomous RUSS to estimate and compensate for the motion of internal organs <cit.>, visualize and track the object of interest <cit.>, and improve the image quality by optimizing the acoustic windows <cit.>, etc. Please refer to Section <ref> for more details. 
§.§.§ User Interface Masuda et al. employed two joysticks to remotely control the three-dimensional rotation and translation individually of the PSM <cit.>. Yet, this manner differs from how experts conduct conventional US examinations. To enhance the intuitiveness of the interaction, a dummy probe is frequently utilized to intuitively control PSM from the expert console <cit.>. A gyroscope was installed within the dummy probe so that it could track the motion of the expert <cit.>. To improve the accuracy of the motion estimation, some mature techniques, such as optical and electromagnetic tracking can be utilized. As the use of a dummy probe allows experts to conduct US examinations as usual, RUSS can reduce training time and increase examination efficiency. However, the lack of force feedback on expert side may hinder the clinical acceptance. To tackle this problem, Martinelli et al. employed a haptic control system that rendered contact force in three dimensions <cit.>. Conti et al. employed a commercial 6-DOF haptic device (Omega 6) to reflect the contact force in six dimensions <cit.> [see Fig. <ref> (a)]. Recently, Naceri et al. directly deployed two 7-DOF Franka Emika Panda <cit.>, one of which was used at expert console with force feedback, and the other one used at patient side to precisely reproduce the movements of the experts. Benefiting from the development of virtual reality (VR) techniques, a VR simulator was designed as a new type of interface for teleoperated RUSS <cit.> [see Fig. <ref> (f)]. Compared to traditional joysticks or other haptic devices, an immersive experience can be achieved using VR simulators, which could intuitively visualize the remote scenes in 3D. The initial evaluation of a VR simulator has been performed by 12 experienced sonographers and the results suggest that the immersive simulator could be used for teleoperated RUSS <cit.>. A deeper discussion about human-robotic interaction studies will be beyond the focus of this paper. To inspire further research incorporating novel human-machine interfaces to improve the efficiency, intuitiveness, and robustness of teleoperated RUSS, we refer readers to two comprehensive surveys on interface approaches <cit.>. Specific to medical applications, Abdelaal et al. provided a crucial review of interfaces that have been used or tested in vivo <cit.>. §.§ Clinical Feasibility Evaluation   Teleoperated RUSS can fully utilize the advanced knowledge of experts. Compared to autonomous RUSS, teleoperated RUSS is easier to be certified for clinical use due to the fact that all diagnostic decisions and scan trajectory are made by experts. To achieve this objective, clinical studies have been performed using different teleoperated RUSS for a number of examinations. Clinical evaluations of existing teleoperated RUSS solutions have been categorized according to their clinical applications as TABLE-<ref>. §.§.§ Abdominal Imaging The abdomen is often examined using US images, which is one of the primary focuses of teleoperated RUSS. To validate the feasibility and diagnostic accuracy of such systems, Arbeille et al. evaluated a preliminary version of a teleoperated RUSS for general abdominal imaging on 20 patients <cit.>. The expert was in a room at some distance (20-50 km) from the patient's site. The time delay between experts and the PSM was less than 0.1 s using ISDN (terrestrial) telephone lines and less than 0.5 s using satellite links. 
To evaluate the performance, the authors validated their approach on four different groups of organs. The results demonstrated that the expert could image the main views (longitudinal and transverse) of the liver, gallbladder, kidneys, aorta, pancreas, bladder, and uterus on the patient. Only the heart and spleen were not identified, in two and four of the 20 cases, respectively. The experiments also showed that sonographers can master the teleoperated RUSS in less than 3 hours, while the examination time (27±7 min for three or four organs) was approximately 50% longer than that of the traditional US examination. In a subsequent study, Arbeille et al. further compared the performance of robotized and conventional US examinations on 87 patients examined in the emergency department of the Tours University in France <cit.>. The results demonstrated that each organ (e.g., liver, gallbladder, pancreas, kidney) could be correctly imaged by the robotized system in 91–100% of cases compared with the conventional US examinations. In addition, the mean visualization score for the teleoperated RUSS was 87.4% for the abdomen, and there were no false diagnoses made in this study <cit.>. In another clinical evaluation, Adams et al. also assessed the feasibility of performing adult abdominal US examinations using a remote RUSS on 18 patients at the University of Saskatchewan, Canada <cit.>. Telerobotic examinations were successful in 92% of the examinations on various abdominal organs (given that the organs were sufficiently visualized on the conventional examination); five pathological findings were identified on both modalities, while three and two findings were identified only by the conventional and telerobotic systems, respectively. Furthermore, they reported that all participating patients were willing (89% were strongly willing and the remaining 11% were willing) to have another telerobotic examination <cit.>. Martinelli et al. carried out a study on 58 patients with a focus on the aorta <cit.>. The examination results demonstrated that all aneurysm cases were correctly detected by both conventional scans and the teleoperated RUSS. Furthermore, the quantitative results showed that the diameter of the patient's aorta can be accurately measured: the interobserver correlation coefficient was 0.98 and the difference in measurement was less than 4 mm in 96.3% of cases. In addition, the examination durations (mean±SD) of the teleoperated system and traditional examinations were 17±8 min and 12±7 min, respectively. Finally, they also reported that the acceptability of patients was 84±18%, which is similar to the result in <cit.>. §.§.§ Cardiovascular Imaging Compared with general abdominal organs, cardiac examinations are considered more technically demanding procedures. Regarding echocardiography, the clinical needs include the visualization and evaluation of the four cardiac chambers, measurements of aortic flow, and the identification of mitral, tricuspid, or aortic valve leaks or aortic stenosis <cit.>. To successfully perform tele-echocardiography, the probe was held by a 3-DOF robotic arm providing three orthogonal rotations, and the robotic arm was in turn fixed to a motorized plate for obtaining translational movements <cit.>. The results on 41 cardiac patients demonstrated that similar measurements can be achieved in most cases (93–100%). Among the 71 valve leaks or aortic stenosis patients, 61 (86%) were successfully detected using tele-echocardiography, and there was no false-positive diagnosis reported. Boman et al. 
also carried out a similar study on cardiovascular examination in Sweden <cit.>. The evaluations were carried out in three different stages. In stage 1, there were 27 patients in a different place than sonographers with a distance of 80 km. Regarding the other two stages, a total of 31 subjects were recruited in a place at 135 km from the experts. The results indicate that real-time echocardiographic examinations are possible <cit.>. Boman et al. compared the tele-echocardiography examination with the standard of care referral approach in terms of time and diagnosis <cit.>. 19 patients were randomized to remote consultation and imaging, and 19 to the standard of care consultation. The results demonstrated that the processing time was significantly reduced in the remote one (only 26.5 days vs 114 days for the standard one). Therefore, compared with the standard of care approach, patients were more satisfied with the remote consultation strategy, which offered an increased rapidity of diagnosis and the likelihood of receiving faster patient management <cit.>. In 2007, Sekar et al. evaluated tele-echocardiography examination in the diagnosis of congenital heart diseases in pediatric populations <cit.>. In this 3-year study, 102 pediatric telecardiology examinations were performed between a tertiary care cardiac center and a remote rural hospital located 193 km away. Pathology was ruled out in 50 children by tele-echocardiography. In addition, heart lesions were identified in 52 children and 30 among them required surgery. By using teleoperation techniques, the total cost for such remote care can be controlled under 90 USD, which becomes considerable for most developing areas <cit.>. Sengupta et al. further validate the feasibility of long-distance (trans-Atlantic) telerobotic US scans for vascular examinations <cit.>. The results showed that the procedure to localize the remote probe along the short axis of the carotid artery took less than 60 s and an examination could successfully be conducted in 4 min. Avgousti et al. employed 4G wireless networks in order to reduce the time delay for live tele-echography <cit.>. However, it is also important to note that the communication stability and potential signal interference may lead to uncertainty. §.§.§ Obstetric Imaging Obstetric imaging is also one of the most frequent applications of US examination in clinical practice. From the beginning phase to the birth of infants, more than five fetal examinations are carried out and such examinations are important to evaluate the health of both fetuses and pregnant women <cit.>. To assess the feasibility of teleoperating fetal US examinations in pregnant women, Arbeille et al. carried out a study on 29 pregnant women in an isolated hospital 1700 km away using both conventional and teleoperation examinations <cit.>. The results demonstrated that the biometric parameters, placental location, and amniotic fluid volume can be correctly measured in most cases (93.1%) using a teleoperated RUSS. Only in two cases, femur length could not be correctly measured. The mean duration of US examination of the remote examinations (18 min) was longer than that of conventional examinations (14 min). Another study with a similar objective was presented by Adams et al. on 30 patients in Canada <cit.>. 
In this study, the results indicated that there was no statistically significant difference between teleoperated RUSS and conventional measurements of head circumference, biparietal diameter, or the single deepest vertical pocket of amniotic fluid; however, there were slight differences in the measurements of abdominal circumference and femur length. Besides, 80% of the fetal structures could be sufficiently acquired by the telerobotic system (range, 57%–100% for each patient). Finally, a survey of participants showed that 92% of patients were willing to have another telerobotic examination in the future. The aforementioned studies demonstrated the feasibility of using teleoperation to remotely carry out fetal US examinations while obtaining biometric measurements comparable in precision to the conventional approach. §.§.§ General Applications Georgescu et al. reported the usability of a teleoperation system for general applications over one year <cit.>. In total, 300 patients were involved: 138 supra-aortic vessels, 68 abdomen, 33 thyroid, 30 lower limb vein, 20 pelvis, 7 kidneys, 3 small parts, and 1 obstetrics. The reported average duration of a teleoperation examination was 24±5 min over all 300 examinations. In addition, the results showed that the use of teleoperation in general medical practice significantly reduced the waiting time for patients (saving several days), while providing similar information to conventional US examinations. It also contributed to saving costs for the healthcare system and facilitating earlier treatment of conditions, potentially leading to improved patient outcomes and less time in care facilities <cit.>. Most recently, a teleoperated RUSS was tested on 22 Covid-19 patients, and the authors concluded that teleoperated RUSS can be used to diagnose common abdominal, vascular, and superficial organ pathologies with acceptable accuracy <cit.>. § ENABLING TECHNOLOGIES FOR AUTONOMOUS RUSS   Recently, interest in autonomous RUSS has increased relative to teleoperated RUSS. Autonomous RUSS has the potential to achieve standardized and reproducible US acquisitions. RUSS solutions further release sonographers from burdensome manipulation tasks and allow them to focus on diagnosis, which requires deep anatomical and physiological knowledge. The move of the research community toward autonomous RUSS has also raised novel scientific questions, which define important and exciting challenges. To develop autonomous RUSS, we first need to understand how human sonographers perform US scans. In this paper, we call this process the recovery of the "language of sonography". The community has not investigated this consciously, but this path can be traced throughout the analysis of the state of the art. The adjustment of contact force, probe position and orientation for optimal image acquisition has often been the first focus. Then, it is also crucial to plan an appropriate path for covering the area of interest and to compensate for the potential motion and deformation of the target anatomy during imaging. These points will be discussed in more detail in the following sections when we review the most relevant state of the art. 
In this section, three fundamental techniques used in RUSS are elaborated on: 1) compliant control, used to apply and maintain a given contact force between the US probe and the patient, 2) orientation optimization, to determine the appropriate probe orientation for a given scan (often orthogonal to the contacted surface), and 3) path planning, to best localize and visualize the anatomy of interest. §.§ Force Control Approaches   Due to the inherent characteristics of US imaging, a certain contact force between a US probe and human tissues is required to optimize acoustic coupling, thereby achieving high-quality US images. It is challenging for human operators to maintain a constant force during US scans. The varying force will result in non-homogeneously deformed US images. Thus, a dedicated force controller is needed to maintain the contact force during scans. Furthermore, such a controller is also crucial for guaranteeing the safety of patients by preventing excessive force. Depending on the target tissues, the acceptable contact force is less than approximately 20 N <cit.>. Meanwhile, a small force (less than 1.2 N) is commonly taken to indicate incomplete contact with the skin <cit.>. It is noteworthy that this subsection only summarizes the force control approaches (both software- and hardware-based) that have been used for developing RUSS. A more general and comprehensive summary of force control can be found in <cit.>. §.§.§ Hybrid Force/Position Controller The traditional hybrid force/position control approaches are implemented in two decoupled subspaces, applying a position control law and a force control law, respectively <cit.>. Both the force and position differences between current and desired values are fed into the robotic dynamic model to update the manipulator's motion. To apply a constant contact force between a probe and subjects, Gilbertson et al. implemented a hybrid position/force controller for a 1-DOF hand-held RUSS <cit.>. In this study, they simplified the contact model as two interfaces (human-machine and probe-patient) using a set of masses, springs, and dampers. Thereby, the contact force can be dynamically connected to the probe position and velocity by selecting proper interface parameters. A similar hybrid position/force method based on an external 6-DOF force/torque (F/T) sensor was designed for a 6-DOF RUSS <cit.>. Their approaches can automatically switch between velocity and force control modes according to the contact condition (free or contact space). External hybrid force/position control is also often used in RUSS. The external controller first updates the position based on the force; then, the positional error is controlled using an internal servo. Pierrot et al. used a PI controller to maintain the contact force and a PID controller to continually run the joint position servo loop for a 7-DOF robotic US system <cit.>. Similarly, Ma et al. used a PID controller to actively compute the variation of the Cartesian position based on the force error, and then used a position controller (provided by the manufacturer) in the inner loop <cit.>. To limit the negative effect caused by potential force measurement errors, a low-pass filter and a moving filter were used to smooth the measured force. The authors claimed that the implementation of such an external force controller is simpler and can be adapted for any kind of robot <cit.>. 
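To make the external force-control scheme more concrete, the following minimal Python sketch illustrates an outer PID loop acting on the force error and producing small Cartesian offsets along the probe axis, which are assumed to be tracked by the robot's built-in position servo. The class name, gains, sampling time, and clipping limits are illustrative assumptions and do not reproduce the controllers of the cited works.

```python
import numpy as np

class ExternalForceController:
    """Outer-loop PID on the contact-force error; the returned Cartesian
    offset along the probe axis is assumed to be tracked by the robot's
    own (inner-loop) position servo."""

    def __init__(self, kp=0.5e-3, ki=0.1e-3, kd=0.05e-3, dt=0.01):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.err_int = 0.0
        self.err_prev = 0.0

    def step(self, f_measured, f_desired):
        # Force error along the probe axis (N); positive -> press harder.
        err = f_desired - f_measured
        self.err_int += err * self.dt
        err_dot = (err - self.err_prev) / self.dt
        self.err_prev = err
        # Cartesian displacement increment (m) along the probe z-axis.
        dz = self.kp * err + self.ki * self.err_int + self.kd * err_dot
        # Clip the increment to keep the motion slow and safe.
        return float(np.clip(dz, -1e-3, 1e-3))

# Hypothetical usage: dz = ctrl.step(f_measured=measured_fz, f_desired=5.0)
# The offset dz would then be applied along the probe axis and sent to the
# robot's internal position servo at every control cycle.
```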
§.§.§ Compliant Controller Regarding the hybrid force/position controller, a position controller is employed either in a sub-space for the traditional ones or in the low-level servoing loop for the external ones. Since the environment is unknown in real scenarios, position control may result in excessive force when moving to the computed positions. To ensure the safety of patients, two compliant control methods (the impedance controller and the admittance controller) are often used. The dynamic model of the compliant controller is described in Eq. (<ref>) <cit.>. F + F_ext = K_m e + Dė + Më where F is the applied force/torque in Cartesian space, e = (x_d - x_c) is the Cartesian position and orientation error between the current pose x_c and the target pose x_d, F_ext is the desired force/torque, and K_m, D and M are the stiffness, damping and inertia matrices, respectively. Based on Eq. (<ref>), compliant performance can be achieved in all directions by assigning different K_m and D, which enables safe/soft interactions between the RUSS and patients. Regarding Eq. (<ref>), there are two different interpretations, which correspond to impedance control and admittance control, respectively. For the former, the pose error is taken as feedback and the computed force and torque are applied to achieve the expected force F_ext. On the other hand, for an admittance controller, the force F applied at the end-effector is measured as input, while the output is the Cartesian movement. Since admittance control only requires the measurement of the external force/torque, it is often used for low-cost robots without accurate joint torque sensors, e.g., Universal Robots <cit.>. On the contrary, impedance control is more often used when robotic manipulators are equipped with accurate joint torque sensors, e.g., the KUKA LBR iiwa <cit.> and Franka Emika Panda <cit.>. When the stiffness of the environment diminishes, the performance of impedance control will decrease due to friction and unmodeled dynamics, while the performance of admittance control will increase <cit.>. Therefore, admittance control could achieve better performance on soft tissues, while impedance control could be more suitable for stiff tissues. 
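The admittance interpretation of Eq. (<ref>) can be illustrated with a discrete-time law in which the measured wrench drives a virtual mass-spring-damper system whose resulting displacement is sent to the robot's position interface. The sketch below is a simplified example under assumed sign conventions and hand-picked gain matrices; it is not the controller of any specific system discussed above.

```python
import numpy as np

class AdmittanceController:
    """Discrete-time admittance law: the wrench error drives a virtual
    mass-spring-damper, and the resulting displacement is added to the
    nominal trajectory pose. Gains and sign conventions are illustrative."""

    def __init__(self, M, D, K, dt=0.002):
        self.M, self.D, self.K, self.dt = M, D, K, dt
        self.x = np.zeros(6)   # virtual Cartesian offset (translation + rotation)
        self.xd = np.zeros(6)  # its velocity

    def step(self, f_measured, f_desired):
        # 6D wrench error (force + torque) drives the virtual dynamics.
        f_err = f_measured - f_desired
        xdd = np.linalg.solve(self.M, f_err - self.D @ self.xd - self.K @ self.x)
        self.xd += xdd * self.dt
        self.x += self.xd * self.dt
        return self.x  # offset applied on top of the commanded pose

# Example gains: soft behaviour along the probe axis (index 2), stiffer elsewhere.
M = np.diag([2.0] * 6)
D = np.diag([80.0, 80.0, 40.0, 5.0, 5.0, 5.0])
K = np.diag([800.0, 800.0, 200.0, 30.0, 30.0, 30.0])
ctrl = AdmittanceController(M, D, K)
```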
§.§.§ Spring-based Mechanism Since some clinical applications, e.g., fetal examination, are highly sensitive to the applied force during US examinations, Tsumura et al. proposed a spring-based mechanism to maintain the contact force and passively adjust the probe pose with respect to the constrained surface <cit.>. Compared to the aforementioned sensor-based controllers, the passive mechanism can apply a constant force quickly and safely, especially in unstructured environments. Wang et al. proposed a spring-loaded ball clutch to limit the maximum contact force <cit.>. In normal cases, the detent structure is in its engaged position, with the ball restrained by a preloaded compressed spring. Once excessive force occurs, the ball comes out of the detent hole; thus, the involved clutch joint can no longer transmit torque <cit.>. In these ways, the maximum contact force of such mechanisms can be mechanically limited to 10 N <cit.> and 21.98±0.96 N <cit.>. Yet, this approach cannot precisely and dynamically control the contact force. To address this challenge, Housden et al. extended their work <cit.> by integrating a customized multi-axis F/T sensor to allow active adjustment of the contact force <cit.>. The designed F/T sensor consists of two pieces with eight legs in total, and the displacements of the legs were measured with eight optoelectronic sensors. By using the measured force as feedback, this system can actively adjust the contact force toward the desired values <cit.>. Bao et al. designed a parallel, motor-spring-based end-effector to actively generate a certain force for US scanning <cit.>. The force is adjusted by changing the position of two sliders connected to a moving platform using springs. The symmetrical configuration keeps the contact force aligned with the probe's centerline. §.§.§ Others Huang et al. attached two thin force sensors (IMS-Y-Z03, I-Motion Inc., China) on both sides of the front face of a linear probe <cit.>. Then, a simple rule was implemented to control the applied force: the probe moves downward 3.1 mm when the force is smaller than 1 N, the probe moves upward 3.1 mm when the force is larger than 8 N, and scans are only performed when both sensors' measurements are in the range of [1, 8] N. Their team extended this work by replacing the 3-DOF linear stage with a 6-DOF robotic arm <cit.>. A robotic arm enables in-plane rotation; thereby, an updated rule was used to maintain a constant force: the probe moves downward 0.2 mm when both forces are smaller than the desired force, the probe moves upward 0.2 mm when the forces are larger than the desired one, and the probe rotates 0.2^∘ (in-plane) when the two forces differ. Compared with other force adjustment approaches, this method is easy to implement, while the handcrafted rule needs further improvement to adapt to inter-patient variations. §.§ Probe Orientation Optimization   The relative probe orientation with respect to the contacted surface is also a key factor dominating the image quality. For some applications like US imaging of bone, the US probe orientation is often optimized to be orthogonal to the constrained surface <cit.>. In certain applications, such as image-guided interventions, the US probe may need to be tilted away from the orthogonal direction in order to better visualize the targets and/or inserted instruments <cit.>. In this section, the articles discussing probe orientation adjustment are summarized in three subcategories: in-plane orientation, out-of-plane orientation, and full orientation optimization. §.§.§ In-Plane Optimization The in-plane orientation of a 2D probe represents the rotation around the short axis of the probe (see Fig. <ref>). In other words, in-plane motion only happens in the plane of the US view. In <cit.>, the in-plane rotation was optimized using the visual servoing technique to improve the general image quality. To quantitatively assess the image quality and further use it as the input signal for servoing control, the US confidence map <cit.> was computed for individual images. The US confidence map provides a pixel-wise measure of signal loss based on a simplified model of wave propagation in tissues. The computed confidence map is often used as a metric of image quality <cit.>. However, it is worth noting that the quality here refers only to the strength of the US signal. The best US images according to the confidence map may not be the best images expected by clinicians in examinations. To obtain US images leading to higher overall confidence values, the probe's orientation was often optimized toward the orthogonal direction of the surface <cit.>. In addition, Jiang et al. and Welleweerd et al. 
also employed US confidence map-based in-plane adjustments to improve sub-optimal contact conditions for arm and breast scans <cit.>, respectively. Huang et al. adjusted the in-plane orientation to balance the contact forces measured at two endpoints on the probe tip <cit.>. Zettinig et al. proposed a 3D-to-3D volume registration to adapt to the movement of the target anatomy; then, they further optimized the in-plane orientation to align the current needle guideline with the planned path on a preoperative CT or MR <cit.>. 
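As an illustration of confidence-driven in-plane adjustment, the sketch below computes the confidence-weighted lateral barycenter of a precomputed confidence map and derives a small proportional in-plane rotation step that re-centers it. The confidence-map routine, gain, and sign convention are assumptions made for illustration and do not reproduce the control laws of the cited papers.

```python
import numpy as np

def confidence_barycenter(conf_map):
    """Lateral barycenter of a per-pixel confidence map (values in [0, 1])."""
    w = conf_map.shape[1]
    cols = np.arange(w)
    weights = conf_map.sum(axis=0)          # column-wise accumulated confidence
    return float((cols * weights).sum() / (weights.sum() + 1e-9))

def in_plane_correction(conf_map, k_rot=0.05, max_step_deg=2.0):
    """Proportional in-plane rotation step (degrees, about the probe's short
    axis) that steers the confidence-weighted barycenter back to the image center."""
    w = conf_map.shape[1]
    offset = confidence_barycenter(conf_map) - (w - 1) / 2.0
    step = -k_rot * offset                  # sign depends on the probe/image convention
    return float(np.clip(step, -max_step_deg, max_step_deg))

# Hypothetical usage, assuming some confidence-map implementation is available:
# conf_map = compute_confidence_map(b_mode_image)   # e.g., a random-walk-based method
# d_theta = in_plane_correction(conf_map)           # small rotation sent to the robot
```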
§.§.§ Out-of-Plane Optimization The out-of-plane motion is defined as the rotation around the probe's axial direction (see Fig. <ref>). In <cit.>, the authors claimed that in-plane adjustments only benefit axial aortic scans marginally; therefore, they optimized the out-of-plane rotation to improve the imaging quality in terms of overall US confidence values <cit.>. A fixed rotation angle interval was applied step by step. However, it is uncommon for existing articles to optimize only the out-of-plane orientation. §.§.§ Full Orientation Optimization To estimate the normal direction of a constrained surface, depth camera-based approaches are most often used in the existing literature <cit.>. The advantage of these approaches is high computational efficiency, while the main limitation is the relatively low accuracy of the estimations. Recently, Ma et al. designed a probe holder with four laser distance sensors to actively adjust the probe's orientation to be normal to the surface <cit.>. The results demonstrated that their adjustment can be computed in real-time. In addition, Jiang et al. proposed a method to identify the normal direction of the contacted surface using the contact force for out-of-plane optimization and US images for in-plane optimization <cit.> (see Fig. <ref>). The bone boundary was used to demonstrate the impact of the probe orientation on the imaging quality. In this study, Jiang et al. proposed a feature called the smooth derivative of the contact force, which enabled accurate estimation of the out-of-plane orientation without requiring an expensive external F/T sensor <cit.>. To further improve the accuracy of the estimated normal direction, Jiang et al. deduced the underlying mechanical model based on the force measured during two orthogonal fan motions at a given contact point <cit.>. The upgraded method works for both convex and linear probes, and due to its purely force-based nature, it is invariant to image noise. Yet, due to nonnegligible deformations of soft tissue (e.g., breast), the force-based approaches are more suitable for orthopedic applications (e.g., limbs and back). Besides, a number of studies optimized the probe's full orientation solely using US images. Welleweerd et al. proposed a framework for automatic breast scanning without requiring patient-specific models <cit.>. To achieve this, in-plane optimization was first carried out to ensure acoustic coupling between the probe and the examined breast. Once the mean confidence value <cit.> of the resulting image is inside the given range, the probe is moved tangentially to the breast. If the current mean confidence value falls outside the specified range, out-of-plane corrections are carried out to maintain constant confidence. The mean error between the estimated normal directions and the ground truth at all points of the trajectory was 12.6^∘ out-of-plane and 4.3^∘ in-plane <cit.>. Chatelain et al. extended their preliminary work <cit.> from in-plane control of a 2D probe to full-orientation control of a 3D wobbler probe using the confidence map <cit.>. Recently, Osburg et al. used a Convolutional Neural Network (CNN) to compute the surface normal at the point of contact based on native 3D volumetric data <cit.>. Instead of identifying the normal direction of constrained surfaces, Jiang et al. estimated the normal direction of a subcutaneous tubular structure directly, based on the segmented vessels of the most recent images <cit.>. The vascular boundaries obtained at different positions contain the local geometrical information (radius and centerline) of the blood vessel; thus, the US probe can be oriented orthogonally to the estimated centerline of the local segment of the tubular structure. 
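A common building block of the depth-camera-based approaches above is the estimation of the local skin normal from a point-cloud patch. The following sketch uses a generic PCA/SVD formulation (the direction of least variance in a local neighbourhood) under an assumed camera frame; it is meant only to illustrate the principle, not the specific pipelines of the cited works.

```python
import numpy as np

def estimate_surface_normal(points, contact_point, radius=0.02):
    """Estimate the skin normal at `contact_point` from a depth-camera point
    cloud (N x 3, metres) by PCA over a local neighbourhood: the singular
    vector with the smallest singular value approximates the surface normal."""
    nbrs = points[np.linalg.norm(points - contact_point, axis=1) < radius]
    if len(nbrs) < 10:
        raise ValueError("not enough neighbours to estimate a normal")
    centered = nbrs - nbrs.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                  # direction of least variance
    if normal[2] > 0:                # assumed camera frame: flip toward the camera (-z)
        normal = -normal
    return normal / np.linalg.norm(normal)

# The probe's long axis would then be aligned with the negative normal so that
# the beam enters the tissue approximately perpendicular to the skin surface.
```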
§.§ Path Generation for Autonomous US Scanning   In order to accomplish US examinations, a proper path is essential to visualize the object or locate the lesion on human tissue, e.g., along a target blood vessel or covering a volume of interest. This section categorizes the existing path planning methods as 1) offline scan path generation methods and 2) online scan path generation methods. §.§.§ Offline Scan Path Generation To locate and evaluate the length and severity of stenosis for planning the treatment of peripheral arterial disease (PAD), Merouche et al. directly defined the scanning path by manually moving the robotic arm along the target artery <cit.>. To address the potential visualization issue caused by small motions after the path planning procedure and to facilitate the tracking of the artery during automatic scans, the probe's position was tuned to maintain the cross-sectional lumen horizontally centered in the US view. Similarly, Jiang et al. manually drew a scan path on the surface of a vascular phantom and then extracted the path based on RGB images <cit.>. Considering autonomous path planning, scan trajectories can be determined on pre-scanned images (e.g., MRI and CT); then, the planned path is transferred to the current setup by registering the live US or RGB-D image to the preoperative atlas. Hennersperger et al. validated the feasibility of autonomously transferring a planned scan path from MRI to the current setup based on the registration between the MRI and 3D surface point clouds acquired by a Kinect camera (Microsoft Corporation, USA) <cit.>. Similarly, Langsch et al. computed the scanning trajectory of an aorta by registering a 3D US volume to the patient's MRI <cit.>. However, due to the need for tomographic data (MRI or CT) for each patient, the advantage of these approaches is reduced in clinical practice. To further address this challenge, Virga et al. carried out non-rigid registration between the patient-specific 3D surface extracted from a depth camera and a generic preoperative MRI template <cit.> [see Fig. <ref> (a)]. Specific to thorax examinations, Jiang et al. presented a skeleton graph-based non-rigid registration between the cartilage point clouds extracted from a tomographic template and US images of patients <cit.>. To further improve the registration accuracy, Jiang et al. introduced a dense skeleton graph to replace the manually designed key points of the skeleton <cit.> [see Fig. <ref> (b)]. Akbari et al. presented a complete US-based approach to find a proper trajectory for breast US imaging <cit.>. A manual prior scan is carried out in advance; then, the desired trajectory for the post scan is computed based on a geometrical analysis of the target using the pre-scanned US images. In addition, the scanning path is often planned directly on the surface extracted by an external camera <cit.>. Mustafa et al. extracted the patient's abdominal surface from an RGB image acquired using a (2D) web camera based on a preset HSV color filter; then, the position of the liver was estimated and a four-step acquisition protocol was applied <cit.>. Due to the lack of imaging depth information, the camera needed to be carefully positioned anterior to the subject. Ma et al. used a Realsense SR305 RGB-D camera (Intel Corporation, USA) to extract the 3D surface data using a depth threshold and further planned the scanning path on the extracted 3D surface <cit.>. Huang et al. extracted 2D skin surfaces of patients from an RGB image using the rule "Red>Green>Blue" <cit.> [see Fig. <ref> (c)]. They claimed this is more generic and robust than the threshold-based approaches. Then, a "snake" trajectory was automatically generated to cover the area of interest. Suligoj et al. used the same logic to generate scan paths over a region manually annotated in an RGB image <cit.> [see Fig. <ref> (d)]. Recently, Ma et al. proposed a learning-based method to extract the human abdomen from a depth camera and further divided the extracted region into four parts for autonomously generating scanning paths of the lung <cit.>. The aforementioned path planning approaches for US scanning were planned directly on the patient's surface. However, the optimal coverage of an underlying volume of interest is not considered. To address this challenge, Graumann et al. proposed a method to automatically compute a suitable scanning path to cover a volume of interest easily selected in preoperative images <cit.>. Depending on the size of the target volume, one or multiple lines were automatically generated for full coverage. To automatically determine the optimal probe position on the skin to monitor the motion of the internal organ of interest, Bruder et al. computed patient-specific US image quality from a given CT scan <cit.>. To further consider the full coverage of subcostal organs such as the liver and heart, Göbl et al. proposed a framework integrating both geometrical and physics-based constraints to estimate the best US scanning path with respect to the limited acoustic windows <cit.>. The poses maximizing the image quality (i.e., less acoustic attenuation) are finally selected. The results on both human and phantom data demonstrated that superior image quality was achieved using their method in comparison with a naive planning approach, while maintaining the necessary coverage of the target. 
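To illustrate how a coverage path can be derived from a segmented skin region, the sketch below generates a simple boustrophedon ("snake") trajectory over a binary mask and its aligned depth map. The row spacing, data layout, and mapping to the robot frame are illustrative assumptions rather than the planning procedures of the cited systems.

```python
import numpy as np

def snake_coverage_path(mask, depth, step_px=40):
    """Generate a boustrophedon ("snake") scan path over a segmented skin
    region. `mask` is a binary image of the region and `depth` the aligned
    depth map (metres); the path alternates direction on every selected row
    so the probe sweeps back and forth with roughly probe-width spacing."""
    waypoints = []
    rows = range(0, mask.shape[0], step_px)
    for i, r in enumerate(rows):
        cols = np.flatnonzero(mask[r])
        if cols.size == 0:
            continue
        cols = cols if i % 2 == 0 else cols[::-1]   # reverse every other row
        # Keep only the two end points of each sweep line; the robot
        # interpolates between them while force control maintains skin contact.
        for c in (cols[0], cols[-1]):
            waypoints.append((r, c, depth[r, c]))
    return waypoints  # pixel coordinates + depth, to be mapped to the robot frame
```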
§.§.§ Online Scan Path Generation Although offline path planning is more often used in RUSS, some online planning approaches based on live US images have also been developed. Online approaches can generate more flexible trajectories than offline approaches, which can effectively guarantee the target's visibility inside the US view, even in the presence of unexpected motion. In <cit.>, Jiang et al. proposed a pipeline to enable a RUSS to automatically perform US screening of tubular structures based only on real-time US image feedback. The US probe was manually positioned on the tubular structure [see Fig. <ref> (e)]. Afterward, a U-Net was activated to continuously segment the cross-sectional vessel lumen from US images; thereby, a set of boundary point clouds was extracted and further used to estimate the geometry (centerline and radius) of the local artery sections. To completely scan the whole artery, the US probe was moved forward in the direction of the estimated local vessel centerline in real-time. In addition, similar work was accomplished by Huang et al. for automatic screening of the carotid artery based on US image feedback <cit.>. In <cit.>, Kim et al. employed a CNN as a classifier for real-time B-mode images to update the probe position for heart examinations. Since the next action is planned in real-time, the online path planning approach can facilitate robust tracking of the target during autonomous scans. To ensure a scanning quality sufficient for clinical diagnosis, Jiang et al. first presented an online segmentation quality-aware method based on the Doppler signal <cit.>. Once the segmentation performance is considered low, the probe orientation is adjusted to enhance the Doppler signal and thereby improve the accuracy and completeness of the reconstructed 3D vessel. The significance of this study lies in its ability to inspire future research into quality-aware, closed-loop robotic scanning. 
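The online vessel-following idea can be summarized by the following sketch, which maps the centroid of the segmented lumen into the robot frame and steps the probe along a centerline direction fitted to the most recent centroids. The frame conventions, step size, and helper inputs (e.g., the image-to-robot transform) are assumptions made for illustration rather than details of the cited implementations.

```python
import numpy as np

def lumen_centroid(mask, pixel_to_mm, probe_pose):
    """Centroid of the segmented lumen (binary mask) mapped into the robot
    base frame; `probe_pose` is an assumed 4x4 transform of the image frame."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean() * pixel_to_mm, ys.mean() * pixel_to_mm
    # Assumed layout: image lateral axis -> x, elevation -> y (0), depth -> z.
    return (probe_pose @ np.array([cx, 0.0, cy, 1.0]))[:3]

def next_scan_step(recent_centroids, step_mm=2.0):
    """Fit a local centerline direction to the last few lumen centroids
    (principal axis via SVD) and step the probe along it."""
    pts = np.asarray(recent_centroids)
    direction = np.linalg.svd(pts - pts.mean(axis=0))[2][0]   # principal axis
    if direction @ (pts[-1] - pts[0]) < 0:                    # keep moving forward
        direction = -direction
    return pts[-1] + step_mm * direction   # next probe target in the base frame
```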
reported that RUSS can intrinsically compensate for small motions caused by breathing or human tremor using compliant force control <cit.>. Heunis et al. employed a 6-DOF Stewart platform to mimic the involuntary periodic movements that occur during scans and further proposed a pipeline to create an effective scanning path to cover a surface while compensating for these motions and adhering to preset contact forces <cit.>. This movement was also compensated for by using force control. The results demonstrated that the reconstruction error of arteries was 1.9±0.3 mm in non-static scenarios. To actively compensate for respiration-induced motion in the liver or prostate, Ipsen et al. applied constant force control to accomplish continuous US scans in long-term monitoring <cit.>. Furthermore, visual servoing (Section <ref>) is another potential solution for compensating for respiratory motion <cit.> and the pulsation caused by the heartbeat <cit.>. §.§.§ Non-Periodic Motion Detection and Compensation Subjects are often adjusted by sonographers to better visualize the target during scans. Thus, the ability to compensate for non-periodic patient motion is crucial for the practical use of RUSS. A representative example of the influence caused by non-periodic motion of the imaged patients is shown in Fig. <ref>. The scanned results are significantly different when the same object is kept stationary or moved during scanning. To obtain complete and accurate 3D US scans of a vascular phantom in the presence of rigid motion, Jiang et al. proposed a vision-based RUSS to actively compensate for such non-periodic motion <cit.>. In this study, five passive markers were rigidly attached to the imaged phantom surface and further used to monitor the potential target motion. Once the target is moved, the motion-aware RUSS automatically computes the transformation and updates the trajectory to recover the scanning from the breaking point. To eliminate the requirement for careful configuration of the passive markers in real scenarios, Jiang et al. monitored the patient's motion based on the real-time segmentation of objects in RGB images and computed the compensation matrix using extracted surface point clouds acquired before and after the motion <cit.>. The results on a realistic arm phantom demonstrated the effectiveness of this marker-less compensation method. The advantages of robotic US (accuracy and stability) and free-hand US (flexibility) were combined by including active compensation for potential patient motion during scans. However, such systems only considered the rigid motion of objects. To further tackle non-rigid articulated joint motions, Jiang et al. proposed a vision-based framework, combining joint detection and non-rigid surface registration, to automatically update scanning trajectories from a template to individual volunteers with varying arm gestures <cit.>. The robustness and accuracy of the proposed system have been evaluated on multiple volunteers. §.§ Deformation-Aware US Imaging   Due to the probe-patient contact force, shape distortion of the visualized anatomy's geometry is inevitable, particularly for soft tissues such as superficial blood vessels (see Fig. <ref>). The force-induced deformation reduces the precision and repeatability of US images, and thereby could further limit the diagnostic accuracy and consistency, especially for computer-assisted diagnosis. To provide precise and reliable US images, pressure-induced image deformation needs to be properly corrected.
Unlike human sonographers, robots/computers are not trained to make a diagnosis based on deformed images. Therefore, such corrections are particularly important for RUSS. To achieve distortion-free images, Treece et al. combined non-rigid image-based registration with position sensing to correct pressure-induced deformations for free-hand 3D imaging <cit.>. Sun et al. computed 2D deformation fields based on the estimated pixel displacements and corresponding contact forces using polynomial regression models <cit.> (a minimal sketch of this kind of force-displacement regression is given below). The pixel displacements were computed based on flow techniques using raw radio-frequency (RF) data. Based on their experimental results, the parabolic polynomial regression model significantly outperformed the linear model. However, there was no significant performance difference between 2nd-order and higher-order polynomial models. Burcher et al. built a model using the finite element method (FEM) to predict the deformation <cit.>. Nonetheless, the performance of the FEM-based approach is heavily dependent on prior knowledge of tissue properties, which are usually hard to measure in real scenarios. To overcome this challenge, Dahmani et al. employed a linear elastic model to approximate personalized biomechanical properties of the involved tissues from the images <cit.>. To alleviate the variation of pressure-induced deformation across the images acquired along a scanning path, RUSS is often required to maintain a constant force during the screening. To correct distorted images, Virga et al. built a 4th-order polynomial model to regress the pixel displacement with respect to the contact force and further propagated the computed deformation field at sparse sampling points to the whole sweep direction <cit.>. The sampling points were selected manually on the first frame, and this method took 186 s on average to compute a deformation field at one location. To speed up the process for compression-free 3D volumes, Jiang et al. proposed a stiffness-based deformation correction approach, incorporating image pixel displacements, contact forces, and nonlinear tissue stiffness <cit.>. To obtain patient-specific stiffness models, robotic palpation was performed at sampling positions. Since tissue stiffness is the key factor dominating the deformation, the optimal deformation regression models at sampling positions can be propagated to other positions on the trajectory by interpolating the estimated local stiffness. However, the state of the art in the field of US image correction for force-induced deformation is not yet applicable to clinical practice. To further achieve this objective, a pixel-wise tissue property estimator and an anatomy-aware correction system should be developed to bridge the gap between different anatomies and different patients. §.§ Ultrasound Visual Servoing   Understanding the interaction of sonographers with the patient and the US probe is of high importance when developing RUSS. In order to acquire B-mode images of the anatomy of interest, sonographers perform a rough positioning of the probe on the human body. Subsequently, the B-mode images are analyzed while adjusting the probe to obtain the final view with the anatomy of interest in focus. This dynamic image-based adjustment and exploration of the anatomy can be defined as “visual servoing".
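Before turning to visual servoing in detail, the force-displacement regression used by several of the deformation-correction works above can be made concrete with the following Python sketch. It is purely illustrative: the data are synthetic and all variable names and parameter values are our own assumptions rather than values from the cited studies.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data: contact forces (N) and the measured displacement
# (mm) of a tracked pixel at each force level.
forces = np.linspace(1.0, 10.0, 20)
displacement = 0.08 * forces**2 + 0.3 * forces            # assumed parabolic response
displacement += rng.normal(scale=0.05, size=forces.size)  # measurement noise

# Fit polynomial regression models of increasing order.
models = {order: np.polynomial.Polynomial.fit(forces, displacement, order)
          for order in (1, 2, 4)}
for order, model in models.items():
    rmse = np.sqrt(np.mean((model(forces) - displacement) ** 2))
    print(f"order {order}: RMSE = {rmse:.3f} mm")

# To correct a sweep acquired at a roughly constant scanning force, the
# predicted displacement is subtracted from the tracked pixel position.
f_scan = 5.0
print(f"predicted displacement at {f_scan} N: {models[2](f_scan):.2f} mm")

In practice such a model would be estimated per sampling point and propagated along the sweep, with the polynomial order chosen on validation data; consistent with the results reported above, orders beyond two typically bring little additional benefit. We now return to image-based probe control.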
While this has been the subject of research in the last decades, we believe that the introduction of deep learning and the advances in reinforcement learning could allow the scientific community to further understand and solve this image-based optimization problem. Recent work that has been published in this field <cit.> can be taken as an indicator for being a potentially interesting research topic in the coming years. In this section, we review some prior work on visual servoing that can be considered as a development of the state of the art towards the goal of autonomous intelligent exploration of particular anatomy and physiology views needed for examination and treatment. §.§.§ Autonomous US Probe Guidance To automatically rediscover a previously registered US imaging view, Bachta et al. developed an image-based visual servoing approach using boundary information and tested it in a simulator <cit.>. The target edge was retrieved using a polynomial regression analysis, and the optimized coefficients were used as visual features to guide a robot-controlled probe to reach a desired image section. However, this method suffers from image noise and is limited to a specific shape. To overcome this challenge, Mebarki et al. employed image moments as visual features <cit.>, which are generic and robust with respect to measurement perturbations. To further achieve a model-free servoing task on unknown targets, they compute the interaction matrix in real-time using B-mode images <cit.>. The experiments on gelatin phantoms demonstrated promising results in terms of minimizing the visual-features error; however, only local convergence can be guaranteed. In particular, in the case of a roughly symmetric object, similar geometric properties can be observed from different cross-sectional images. To overcome this shortage, Nadeau et al. defined a set of 2D features based on a three-dimensional space using a motorized 3D probe <cit.>. To accurately and actively navigate the probe to a given US plane using the visual servoing technique, Duflot et al. first used the subsampled shearlet coefficients as novel visual features as an input to the controller, instead of pure image signal information, i.e., point, lines, moments, etc. <cit.>. Since a set of noiseless and redundant features can be extracted using shearlet coefficients, promising performances of their approach in terms of accuracy, repeatability, and robustness could be achieved. A comprehensive comparison between shearlet-based and photometric-based visual servoing controllers was carried out in both simulator and physical phantom <cit.>. §.§.§ Imaging Stabilization and Object Tracking Visual servoing has also been used to track anatomies of interest and perform online compensation of the anatomy’s motion to stabilize the real-time US images. Without compensating for some potential motion like breathing, the resulting images will be affected. This will lead to inaccuracies in the estimation of the precise location of intervention target tissues. US visual servoing technologies are developed to compute the corresponding probe adjustment against environment dynamics based on real-time image feedback. Nadeau et al. presented an intensity-based approach to maintain the view of an organ while compensating for the physiological motion of the patient <cit.>. Since the computation of image moments depends on object segmentation, image intensity values were directly used as visual features. 
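As a concrete illustration of these two feature choices, the short Python sketch below computes moment-based features (area, centroid, orientation) from a segmented binary mask and contrasts them with using the raw intensity values directly. The toy image, the function, and all names are our own assumptions and do not come from the cited works.

import numpy as np

def image_moments(mask):
    # Zeroth-, first- and second-order moments of a binary mask.
    ys, xs = np.nonzero(mask)
    area = float(len(xs))
    cx, cy = xs.mean(), ys.mean()                    # centroid
    mu20 = np.mean((xs - cx) ** 2)                   # central second-order moments
    mu02 = np.mean((ys - cy) ** 2)
    mu11 = np.mean((xs - cx) * (ys - cy))
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # principal orientation
    return {"area": area, "centroid": (cx, cy), "orientation": theta}

# Toy B-mode-like image containing one bright elliptical target.
h, w = 240, 320
yy, xx = np.mgrid[0:h, 0:w]
target = (((xx - 180) / 45.0) ** 2 + ((yy - 100) / 25.0) ** 2) < 1.0
image = 0.2 * np.random.rand(h, w) + 0.8 * target

# Moment-based features require a segmentation of the target ...
print(image_moments(image > 0.5))

# ... whereas intensity-based servoing uses the pixel values themselves as the
# feature vector, avoiding explicit segmentation.
intensity_features = image.ravel()
print(intensity_features.shape)

In a servoing loop, the controller drives the error between such features and their values at the desired view toward zero.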
In a follow-up work, Nadeau et al. adapted their intensity-based method for 3D probes and performed first validations on soft animal tissues <cit.>. In 2015, Nadeau et al. applied a similar intensity-based visual servoing method to keep a target centered within a virtual imaging view in the context of intracardiac surgery <cit.>. Its effectiveness has been validated on in-vivo data. Besides cardiac applications, Nadeau et al. applied visual servoing to stabilize respiratory motion by compensating for periodic disturbances with a predictive controller <cit.>. In addition to intensity-based approaches, Krupa et al. employed US speckle information to estimate both in-plane and out-of-plane motion, thereby realizing the tracking of soft tissue movements in the US view <cit.>. Speckle is often considered to be noise; however, it conveys valuable data on the tissue of interest. Speckle contains spatially coherent information between consecutive US images because it physically results from coherent reflections of small components in human tissue. The preliminary experiments performed on a phantom with 2-DOF in-plane and out-of-plane motions demonstrated the potential of a speckle-based servoing approach. The validation for 6-DOF motion was further reported in <cit.>. To further consider the deformation of soft tissues, Royer et al. developed a physics-based model to facilitate the accurate tracking of the target of interest in 3D US images <cit.>. §.§.§ Imaging Quality Optimization Visual servoing techniques have also been investigated to improve imaging quality. Chatelain et al. first introduced the US confidence map as a new feature for visual servoing <cit.>. The authors claimed that the US imaging quality could be improved by optimizing the probe orientation to maximize the overall confidence value. An interesting extension using 3D probes instead of 2D probes has been reported in <cit.>. To evaluate the effect of the proposed method in real scenarios, in-vivo validations were performed on healthy volunteers. In addition, Patlan et al. directly employed elastography as the input of the visual servoing controller <cit.>. To optimize the quality of the resulting elastography, the probe was automatically actuated to image a soft-tissue object from different views, and the acquisitions were further fused to enhance the computed elastography. §.§ Elastography Imaging   US elastography is a non-invasive technique aiming to estimate the mechanical properties (i.e., stiffness) of the underlying soft tissues. Elastography has gained great interest in applications such as differentiating tumors from healthy tissues (breast, prostate, liver, etc.) and guiding radiofrequency ablation surgeries <cit.>. Based on the underlying principles for producing US elastography, the currently available techniques can mainly be grouped into shear wave imaging and mechanical strain imaging. In shear wave imaging, the propagation speed of the shear wave is measured. For strain imaging, a mechanical compression is performed with the US probe on the object's skin; since the compression process can be accurately controlled and measured using robotic techniques, accurate and standardized elastography is expected to be achieved. Compared with shear wave imaging, strain imaging is more common for robotic elastography because it does not require specialized US hardware. Schneider et al. computed laparoscopic US elastography using an external vibrator positioned on the patient skin, where the US probe was remotely controlled by da Vinci (see Fig.
<ref>) <cit.>. Patlan-Rosales et al. computed strain images using real-time radio-frequency (RF) signals to precisely locate subcutaneous tumors <cit.>. In this study, robot-assisted palpation was used instead of an external vibrator and the resulting strain images were used to horizontally maintain the object in the imaging center. To estimate the strain map of moving tissues, Patlan-Rosales et al. estimated and compensated the non-rigid motion using visual servoing on an abdominal phantom <cit.>. Instead of 2D elastography, the same team extended their work to create 3D elastography based on the pre- and post-compressed volumes obtained by a 3D US probe <cit.>. To compute 3D elastography without using a 3D probe, Huang et al. designed a linear sliding track with a position sensor and a height-adjustable holder for conventional 2D probes <cit.>. In this study, the pre- and post-compression echo signals were recorded by manually adjusting the height of the probe holder. Then, paired frames of RF data from the pre- and post-compression sweeps were obtained by interpolation. 2D strain images were computed using the paired RF data; thereby, 3D strain maps were obtained by stacking the computed 2D strain images. To allow automatic acquisition of 3D strain maps, they replaced the linear track with a motorized 3-DOF linear stage <cit.> and a 6-DOF robotic arm <cit.>, respectively. § AI-POWERED ROBOTIC US ACQUISITION   AI techniques have been seen as a promising way to further improve the automation level of RUSS by enhancing the understanding of US images and enabling the intuitive transfer of senior sonographers' advanced physiological knowledge. Such techniques have gained increasing attention most recently. A diverse set of tasks like segmentation and classification of US images have achieved great success. Regarding the field of US image segmentation and classification, a large number of research articles have been published. More detailed techniques can be found in these survey articles <cit.>. In this article, we will only focus on the studies that aim to automatize and/or standardize US scanning using AI-based approaches. More specifically, the approaches tried to automatically search for specific anatomical features or navigate a probe to display standard US planes needed for examinations. These tasks are challenging because RUSS must be able to properly interpret the current states (US image, contact force, probe pose) and the surrounding context. Due to the potential tissue deformation and inconsistent acoustic artifacts of medical US images, guiding a probe to visualize target objects in desired planes is a highly sophisticated task, which requires years of training <cit.>. However, such knowledge is not yet available for robots or computers. Due to the great advantage in feature representation over naive handcrafted features, CNN has the potential to achieve superhuman performance to robustly and accurately locate standard planes on challenging US images. Chen et al. employed a deep CNN to identify the fetal abdominal standard plane from recorded US video <cit.>. Since data collection and manual labeling are time-consuming, a transfer learning strategy was used to guarantee the performance with limited training data. To achieve real-time performance, Baumgartner et al. proposed a deep CNN architecture called SonoNet to automatically detect 13 fetal standard planes as well as provide localization of the fetal structures using a bounding box <cit.>. 
The SonoNet was trained in a weakly supervised mode with only image-level scan plane labels, which make it possible to prepare a large data set. These approaches aid sonographers to locate standard planes that can also improve efficiency in particular for novices. Yet, these methods cannot automatically guide the probe towards target planes or anatomical structures of interest. To enable the ability of RUSS to automatically perform US scans, Mylonas et al. proposed a learning-based approach allowing autonomous execution of US scanning according to expert demonstrations <cit.>. To achieve this objective, a Gaussian Mixture Modeling (GMM) was employed to model the demonstrations (trajectories) towards target objects in a probabilistic manner. However, since the real-time US image was not taken into consideration, all the demonstrations roughly started from the same initial position. This limitation severely impairs the usability of this method in real scenarios. To overcome this limitation and further provide real-time probe movement guidance for obtaining standard planes, Droste et al. proposed a behavioral cloning framework to mimic the process of sonographers searching for standard planes <cit.>. The proposed US-GuideNet consists of two fully connected layers and a gated recurrent unit (GRU) used to extract the sequential information. Due to hardware limitations, the predicted next movement of the probe and the estimated final standard planes only accounted for the rotational component, while the translational component remained unaccounted for. The performance of the imitation-based approach heavily relies on the given demonstrations. However, human US demonstrations are frequently and inherently sub-optimal, where the sonographers often need to adjust the probe around the desired pose to finally determine the optimal view. To tackle sub-optimal demonstrations, Burke et al. introduced a probabilistic temporal ranking model which assumes that the images shown in the later stage are more important than the earlier images <cit.>. The probabilistic ranking model can generate a large data set consisting of pair-wise images based on limited demonstrations; and then, a reward inference network was trained to assess individual B-mode images in self-supervised mode. To automatically navigate the probe to the viewpoint visualizing the mimicked tumor inside the gel phantom, an exploratory Bayesian optimization policy was employed. Nonetheless, due to safety concerns, it is impractical to interact richly with patients to gain enough experience to achieve the optimal searching policy in real scenarios. The process of navigating a US probe to a proper viewpoint displaying standard planes can be seen as a series of probe motions performed in accordance with current observations (e.g., US images, force, probe pose). Therefore, the reinforcement learning (RL) architecture has been seen as a particularly suitable solution for this type of task. Milletari et al. presented an initial work using a deep Q-learning (DQN) architecture to guide sonographers towards the correct sonic window for cardiac examination <cit.>. To avoid dynamic interaction with patients, a grid world environment was built over the chest using recorded videos to simulate acquisition environment. The results demonstrated that the DQN-based approach achieved better results (86.1% correct guidance) than a supervised approach (77.8% correct guidance) trained on the same data. 
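To give a flavor of this kind of formulation, the following Python sketch trains a toy Q-network on a simulated grid, where each cell stands for a candidate probe position and one cell is (arbitrarily) assumed to provide the correct acoustic window. It is a deliberately simplified illustration (no replay buffer, no target network, synthetic rewards, invented grid size and target) and does not reproduce any of the cited systems.

import random
import torch
import torch.nn as nn

GRID, ACTIONS = 8, 4            # 8x8 grid; actions: up, down, left, right
TARGET = (5, 6)                 # cell assumed to give the desired acoustic window

def step(state, action):
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    r, c = state
    dr, dc = moves[action]
    nxt = (min(max(r + dr, 0), GRID - 1), min(max(c + dc, 0), GRID - 1))
    done = nxt == TARGET
    return nxt, (1.0 if done else -0.01), done

def encode(state):              # one-hot encoding of the grid cell
    x = torch.zeros(GRID * GRID)
    x[state[0] * GRID + state[1]] = 1.0
    return x

q_net = nn.Sequential(nn.Linear(GRID * GRID, 64), nn.ReLU(), nn.Linear(64, ACTIONS))
optim = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, eps = 0.95, 0.2

for episode in range(300):
    state = (random.randrange(GRID), random.randrange(GRID))
    for t in range(50):
        # epsilon-greedy action selection
        if random.random() < eps:
            action = random.randrange(ACTIONS)
        else:
            action = int(q_net(encode(state)).argmax())
        nxt, reward, done = step(state, action)
        # one-step temporal-difference target
        with torch.no_grad():
            target = reward + (0.0 if done else gamma * q_net(encode(nxt)).max().item())
        pred = q_net(encode(state))[action]
        loss = (pred - target) ** 2
        optim.zero_grad(); loss.backward(); optim.step()
        state = nxt
        if done:
            break
print("greedy action from (0, 0):", int(q_net(encode((0, 0))).argmax()))

Real systems replace the one-hot grid state with US images (and possibly force and pose signals) and add the usual DQN stabilization mechanisms.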
A similar work also trained a DQN on a simulated 2D grid environment to navigate the probe towards the sacrum <cit.>. To automatically terminate the navigation process, a binary classifier (ResNet18) was employed to determine if the target object had been reached. Since this method only considered 3-DOF translational movements, the probe orientation needs to be carefully initialized. To further eliminate the requirement of manual initialization and automatically localize the paramedian sagittal oblique plane (a standard plane used in spine US examinations), Li et al. trained a DQN to predict the potential actions in a 5-DOF space (all degrees of freedom except translation along the probe centerline) <cit.>. In contrast to the grid world environment, this work built a simulator using 3D US volumes that cover the target anatomy of interest. This simulator can generate synthetic US images for arbitrary probe poses. The experimental results demonstrated that the method can repeatably navigate the probe to the target standard plane with an accuracy of 4.91 mm (translational) and 4.65^∘ (orientational) in the intra-patient setting. Then, the authors extended the work by adding a deep learning module (VGG-16) to recognize the target standard views from real-time US images <cit.>. Due to the US simulator, a large amount of state-action data can be obtained for training the DQN agent. In addition, to learn a policy to guide the probe to the position visualizing the kidney, Chen et al. used a supervised learning process to predict the next actions based on the current US image; an actor-critic RL module was developed to improve the utilization of data and enhance the generalization <cit.>. Recently, to bridge the gap between simulation and real scenarios, Bi et al. proposed VesNet-RL to perform US standard plane (longitudinal view) searching for vascular structures <cit.>. To achieve high generalization capability, this study computed the binary mask of real-time B-mode images and used the background-irrelevant binary masks as the input to train the RL agent. Instead of performing validation in a simulated environment with a virtual probe, Ning et al. proposed a state representation model to encode the force and US images into the scene image space acquired using an RGB camera; then, an agent was trained using the proximal policy optimization (PPO) method to control the robotic manipulator to automatically perform US scans in the real world <cit.>. Similarly, Deng et al. employed a deep neural network to encapsulate the scanning skill (the US images, the pose/position of the probe, and the contact force) into a high-dimensional multi-modal model; then, a policy was trained based on expert demonstrations <cit.>. Due to the differences between the images in the given demonstrations and the real ones obtained during dynamic interactions, the trained model was further improved with guided explorations carried out by human operators. However, such manual correction is very expensive during clinical examinations, and it will limit the efficiency of the RUSS. Instead of directly learning a policy to search for standard planes, Jiang et al. proposed a novel machine learning framework (MI-GPSR) to understand the implicit physiological knowledge in expert demonstrations, which is implemented in a self-supervised fashion using a probability ranking approach <cit.>.
To ensure the generalization capability of the method, the authors employed the mutual information <cit.> to explicitly disentangle the task-related features from the domain features. The results on three types of phantoms [gel tubular structure, chicken heart, and lamb kidney phantom (see Fig. <ref>)] demonstrated that MI-GPSR can properly predict the reward of individual US images from unseen demonstrations and unseen phantoms with the same anatomy <cit.>. Understanding and modeling the semantic reasoning and intention of expert sonographers can facilitate not only the development of autonomous intelligent RUSS but also the design of US education and training systems and advanced methods for grading and evaluating the performance of human and robotic sonography. § OPEN CHALLENGES AND FUTURE PERSPECTIVES   Medical robots have gained increased attention, in particular during the COVID-19 pandemic. The role of robotics in managing public health and infectious diseases has been widely discussed among the community <cit.>. In order to apply RUSS in clinical practice, there are still many open challenges, including both technological (e.g., deep understanding of the dynamic scene, and advanced sensing technologies) and nontechnological (e.g., regulatory affairs and financing) aspects <cit.>. Here, we highlight two aspects that will widely affect the roadmap for RUSS, particularly for clinical translation and commercialization: 1) the acceptance of RUSS, and 2) the ethical and legal issues. In addition, we discussed some promising research directions to inspire the future development of RUSS. §.§ Acceptance by Patients and Clinicians The RUSS are designed to help both sonographers and patients in clinical practice. Besides demonstrating comparable or even better outcomes, the acceptance for RUSS is also important. Here, we want to first make a distinction between the concepts of acceptance and trust. Trust is mostly based on how well RUSS performs in terms of technical performance, such as safety, clinical results, robustness, repeatability, and so on. Yet, effective communication, friendly interaction, and mental development would also be necessary for improving acceptance. Regarding teleoperated RUSS, Adams et al. indicated that all patients (18) were willing (89% were strongly willing and the remaining 11% were willing) to have another telerobotic examination <cit.>. A similar result was reported by <cit.>, where 97% of 28 patients were willing to have another teleoperation scan. However, the number of participating patients in these two studies is limited. A more comprehensive survey about the patients' acceptance of RUSS should be carried out in the future. Furthermore, it is noteworthy that the clinicians' attitudes toward RUSS are still missing. Teleoperation systems are controlled by human operators, and there are some very successful teleoperation surgical systems, e.g., da Vinci system. This fact contributes to the positive attitude of stakeholders for teleoperated RUSS <cit.>. In contrast, since autonomous RUSS are partially or fully out of the control of experts, non-negligible worries about safety arise, which stress both patients and experts during scans. Autonomous RUSS is still far from gaining widespread acceptance. A standard evaluation metric considering clinical practices will help improve the trustiness of emerging autonomous medical robotics <cit.>. Nagy et al. 
defined the concept of level of Clinical Realism: 1) Training tasks with rigid phantoms; 2) Surgical tasks with simple phantoms; 3) Surgical tasks with realistic phantoms, but little or no soft-tissue interaction; 4) Surgical tasks with soft-tissue interaction; 5) Surgical tasks with soft-tissue topology changes <cit.>. To tackle the safety concern of autonomous RUSS, robotic arms are often controlled in compliant force mode, which will result in soft interaction between the probe and patients to prevent excessive contact force <cit.>. A force threshold is specified as a hard limitation in the low-level controllers to completely eliminate the potential extreme situation. The RUSS will stop instantly whenever the real-time force exceeds the predetermined threshold, which was 25 N in <cit.>. During robotic scans, two emergency buttons are often held by the clinical expert and the patient, respectively, to incorporate their observations into the safety-aware loop. Such a dedicated multi-layer safety-aware framework is beneficial for increasing the trust of clinicians and patients. By offering detailed explanations of the ongoing robotic US scans over audio and doing some straightforward interactions with patients such as ”high five", Eilers et al. claimed that the acceptance from patients could be enhanced <cit.>. To improve the acceptance of new medical devices in clinical practices, the robotic system with a medical certification can speed up the process in both research and market-driven developments <cit.>. For example, KUKA LBR iiwa has been widely used as the key component for developing RUSS <cit.>. Nevertheless, this comes with a high unit cost and may necessitate the assistance of an experienced engineer for imaging acquisition or routine system maintenance <cit.>. Since the fee will be paid by the end-users, the financial issue will become a practical factor hindering the acceptance from the patients. Most recently, Kosa et al. examined the role of robotics in Intensive Care Medicine and their acceptability to patients and caregivers <cit.>. They concluded that it is still immature to use robots directly handling patients, and close collaborations between roboticists and clinicians are required to advance robotics to benefit the ICU. §.§ Ethical and Legal Issues The ethical and legal issues regarding medical robotics are still not clearly defined, particularly for autonomous systems. The distribution of responsibility between experts and RUSS (or other surgical robotic systems) remains unclear. Clinical translation will also need regulatory acceptance. In order to properly tackle the ethical, regulatory, and legal issues for RUSS, Yang et al. divided surgical robots into six subgroups in terms of autonomy levels: no autonomy, robot assistance, task autonomy, conditional autonomy, high autonomy, and full autonomy <cit.>. To further improve the concept of level of autonomy, Haidegger defined the term “situation awareness" as the operator’s perception, comprehension, and prediction of a robot’s behavior in its environment <cit.>. Then, “situation awareness" is used to distinguish the required level of human supervision. Up to the time of writing this article, commercial surgical robots are still solidly resting at Level-0, while a very large number of high-autonomy surgical robotic systems are waiting for clinical translation <cit.>. Since commercial surgical robots are dominated by a few disproportionately large companies; thereby they have no rush in disrupting the status quo <cit.>. 
Ethical and legal regulations are critical for clinical translation and further commercialization. The need for such regulations has been highlighted by various senior researchers in multiple impactful publications recently <cit.>. To establish such regulations for medical robots, O'Sullivan et al. defined three different responsibilities: (1) accountability: the capacity of a system to give an explanation for its actions; (2) liability: the legal liability for potential damages caused by a robot; and (3) culpability: whom and how to implement punishment <cit.>. In addition, Vayena et al. discussed ethical and legal issues for digital health in terms of privacy and security, trust, and accountability <cit.>. As a large amount of data is often necessary for analysis, protecting privacy is undoubtedly important for avoiding misuse. Public trust is of paramount importance. Vayena et al. considered that the creation of a culture of trust will enable all stakeholders to benefit from the development of digital health <cit.>. Similarly, Yang et al. summarized five increasingly pressing topics in terms of ethics for robotics and AI <cit.>. Besides the aforementioned terms like responsibility, this work further emphasized societal issues such as the potential influence on employment and human freedom. Due to the quick evolution of the area of medical robotics, a proper and comprehensive regulatory system will boost a prosperous market and gradually benefit all stakeholders. To deal with the unsolved issues regarding the safety, transparency, and trustworthiness of modern medical devices with a certain level of autonomy, the two leading Standard Development Organizations, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), created the first joint standardization document (IEC/TR 60601-4-1) regarding autonomy for technical developers <cit.>. Recently, Prestes et al. established the first global ontological standard for AI and robotics: IEEE 7007—Ontological Standard for Ethically Driven Robotics and Automation Systems <cit.>. For an in-depth review of the ongoing initiatives regarding regulations, we highly recommend that readers refer to these two articles <cit.>. §.§ Future Perspectives In addition to challenges, there are also numerous opportunities in the field of RUSS, particularly in light of the boom in both fundamental sensor development and advanced AI research. This survey will elaborate on future perspectives from these two aspects. By providing an understanding of the state of the art, we hope it can stimulate a number of exciting ideas. To clarify, the opportunities extend far beyond what is described below. §.§.§ Fundamental Sensing Systems Sensors are essential components of all intelligent systems. Generally, the development of new sensors has a substantial effect on existing systems in numerous ways. To achieve the ultimate goal of an autonomous RUSS, it is necessary to integrate multiple sensing systems mimicking the sophisticated human sensing system. With efficient data fusion techniques, redundant, multi-modality data would aid in achieving robust and reliable perception results. This applies not only to RUSS but to a vast array of autonomous systems. Most recently, the concept and development of US patches have become attractive. Owing to their small size, stretchability, and independence from US gel, such patches are highly desirable for continuous healthcare monitoring.
Traditional US probes are rigid and bulky, making them unsuitable for imaging through nonplanar surfaces. To address this challenge, Hu et al. proposed a stretchable US probe that can conform to and detect nonplanar complex surfaces <cit.>. This soft probe consisted of a 10× 10 array of piezoelectric transducers covered by compliant silicone elastomers, and the results demonstrated that it could be stretched by more than 50%. Similarly, Wang et al. developed and tested a skin-conformal ultrasonic phased array to monitor physiological signals from tissues up to 14 cm deep <cit.>. To tackle the practical issue that image quality is highly affected by US gels, Wang et al. designed a bioadhesive US device consisting of a thin and rigid US probe robustly adhered to the skin via a couplant made of a soft, tough, antidehydrating, and bioadhesive hydrogel-elastomer hybrid <cit.>. Based on this device, continuous imaging of internal tissues over days becomes feasible. Most recently, Hu et al. demonstrated a wearable cardiac US imager providing direct cardiac function assessment <cit.>. Such fundamental changes in US probes would open numerous opportunities for revolutionizing the techniques of robot-assisted US imaging. §.§.§ Advanced AI-based RUSS We consider AI-based RUSS to be another promising direction, where the core task is to improve the intelligence of RUSS. To this end, the research community first needs to improve the computer's understanding of dynamic environments through multi-modality signals. Only when the system possesses precise perception abilities can we further expect and explore ways to make proper decisions autonomously. Several studies have demonstrated that AI-based approaches outperformed conventional image processing methods <cit.>. Benefiting from the accurate segmentation of target objects (e.g., blood vessels), precise state representations will further facilitate the development of autonomous scanning <cit.> or autonomous exploration of standard US planes <cit.>. In addition, advanced learning-based frameworks have the potential to be used to transfer senior sonographers' physiological knowledge and experience to novices. Recent studies in the direction of learning from demonstrations <cit.> implicitly give rise to an attractive and influential new research topic: the recovery of the “language of sonography". Hands-on experience is very important and necessary for sonographers. Senior sonographers who can perform flawless US scans are still unable to directly parameterize and intuitively describe the acquisition requirements. However, US examinations are carried out based on their understanding of high-level physiological knowledge. Such knowledge is common among sonographers, although their comprehension may vary slightly due to experience. The concept of recovery of the “language of sonography" refers to the underlying understanding of high-level anatomical knowledge. We believe that efforts to extract the “language of sonography" from intuitive demonstrations with multiple signals, such as US images, RGB-D images, force information, probe movement, gaze information, etc., are as valuable and essential as the progress made in robotic sonography itself <cit.>. § DISCUSSION Robotic technologies have demonstrated promising potential to extend the use of US imaging in the healthcare industry, such as remote examinations and accurate, quantitative control of acquisition parameters.
Compared with conventional US examinations, although current RUSS cannot yet show superiority in terms of improving clinical outputs, a number of benefits have been demonstrated. From the perspective of patients, the waiting time for the healthcare intervention was significantly reduced from 144 to 26.5 days <cit.> and their cost was reduced as well <cit.>. As for sonographers, robots bring dexterity as well as reduce work-related musculoskeletal disorders <cit.>. Additionally, RUSS has the potential to make a significant contribution in a variety of clinical scenarios, including performing trauma examinations in pre-hospital settings <cit.>, freeing up a clinician's hand during the intervention <cit.>, and performing routine PAD screening or monitoring without radiation <cit.>. When it comes to trauma scans, it is vital to spot life-threatening intracavitary hemorrhage as soon as possible because this will enable doctors to make prompt treatment decisions to save lives in emergency scenarios. RUSS could be used for reliable and accurate trauma scan identification in pre-hospital settings by fusing precise sensing devices with a cutting-edge learning-based semantic segmentation framework. Continuing the current progress on RUSS requires a deep understanding of how its embedded technologies add value to healthcare practices. Intelligent robotic imaging systems could provide different benefits. On one hand, they can democratize the healthcare by making US examination available at locations in which patient populations do not currently have access to expert sonographers. On the other hand, to maximize the added value of RUSS, it is important to also focus on enabling new types of interventions or new procedures that are impractical or impossible based on traditional US examination, e.g., 3D or 4D visualization of scanned anatomy compensating or embedding physical breathing and heartbeat. Although there is not yet any fully autonomous system for US examinations, autonomy is one of the main objectives of the scientific community. Similar to surgical robotics, autonomous RUSS will be more challenging to commercialize <cit.>, however, due to its nature of offering images and visualization rather than decision making, cutting, and suturing tissues, we believe autonomous RUSS is easier to be certified and productized than autonomous surgical robotic solutions. On the other hand, compared to robotic X-ray and nuclear imaging, RUSS may be harder to certify because it requires direct interaction with patients. Researchers, therefore, need to continue their studies to guarantee the trust in and acceptance of autonomous RUSS by both doctors and patients. The reported results on current autonomous RUSS are still far from maturity and do not perform as well as or outperform clinicians. Most existing research makes simplifying assumptions and often uses artificial setups for their validation. For example, most US servoing approaches (Section <ref>) are validated on phantoms or using simulation rather than on human subjects, and the existing motion and deformation compensation approaches may not perform as well on patients within the complex and dynamic clinical setups. * Could advanced machine learning allow us to learn the “language of sonography" by observing expert sonographers? * Could our RUSS systems understand the physics of imaging and its interaction with dynamic patient physiology? * Could RUSS allow optimizing B-Mode, 3D and 4D image acquisition? 
* Could advanced sensing and intelligent control allow for guaranteeing reproducibility and safety of scanning procedures? * Could multimodal imaging and pretraining allow RUSS systems to observe and understand the specific anatomy and physiology of each patient? * Could explainable AI enable RUSS systems to report and justify their actions and decisions to physicians? * Could user-centric RUSS design allow smooth and friendly communication between sonographer robots, physician colleagues, and patients? Answering each of these exciting and essential questions requires large multi-disciplinary scientific and engineering communities to gather, communicate and collaborate. The current review paper hopes to play a small role in gathering and highlighting some of the requirements and opening the path for the community to study and analyze the next crucial steps to take. § CONCLUSION This survey has provided a brief picture of the rapidly evolving field of robot-assisted US imaging systems. Starting from the technical developments and clinical translations of various teleoperation systems in the first decade of the new millennium, in Section <ref>, the article summarizes the path the community took to get to its recent research focus on autonomous RUSS, in particular after the boom of machine learning and artificial intelligence throughout the last decade. It is challenging to develop intelligent RUSS solutions, which require a number of advanced capabilities to understand dynamic environments, the physics of US imaging, and human anatomy and physiology, and thereby to tackle complex cases of diagnostic and interventional imaging. To date, no such systems are available. This paper aims at reviewing the state of the art and discussing the paths the community has taken or needs to take in the future. The survey shows that recent progress has demonstrated that RUSS may be able to improve image acquisition and 3D visualization, also taking motion and deformation into account, real-time geometrical (including volumetric) measurements, and in particular their reproducibility. US handling habits vary among expert sonographers and cannot be well described using handcrafted features. We believe that in the near future, the development of advanced machine learning will allow for figuring out the underlying “language of sonography" based on expert demonstrations. This can not only allow for autonomous intelligent RUSS development but also for designing US education and training systems, and advanced methodologies for grading and evaluating the performance of human and robotic US examinations. In view of its speed of progress, RUSS has the potential to revolutionize not only US-based medical interventions themselves but also clinical screening, diagnosis, and robotic-assisted surgery. § DECLARATION OF COMPETING INTEREST The authors report no conflicts of interest. § ACKNOWLEDGMENTS The authors would like to acknowledge the Editors and anonymous reviewers for their time and implicit contributions to the improvement of the article's thoroughness, readability, and clarity.
http://arxiv.org/abs/2307.09973v1
20230714092619
Source-Free Domain Adaptive Fundus Image Segmentation with Class-Balanced Mean Teacher
[ "Longxiang Tang", "Kai Li", "Chunming He", "Yulun Zhang", "Xiu Li" ]
cs.CV
[ "cs.CV" ]
Tsinghua Shenzhen International Graduate School, Tsinghua University, China NEC Laboratories America, USA ETH Zurich, Switzerland {lloong.x, li.gml.kai, chunminghe19990224, yulun100}@gmail.com [email protected] Source-Free Domain Adaptive Fundus Image Segmentation with Class-Balanced Mean Teacher Longxiang Tang1 Kai Li2[1] Chunming He1 Yulun Zhang3 Xiu Li1[1] August 12, 2023 ====================================================================================== [1]Corresponding author. This paper studies source-free domain adaptive fundus image segmentation, which aims to adapt a pretrained fundus segmentation model to a target domain using unlabeled images. This is a challenging task because it is highly risky to adapt a model using only unlabeled data. Most existing methods tackle this task mainly by designing techniques to carefully generate pseudo labels from the model's predictions and use the pseudo labels to train the model. While often obtaining positive adaptation effects, these methods suffer from two major issues. First, they tend to be fairly unstable: incorrect pseudo labels that abruptly emerge may have a catastrophic impact on the model. Second, they fail to consider the severe class imbalance of fundus images, where the foreground (e.g., cup) region is usually very small. This paper aims to address these two issues by proposing the Class-Balanced Mean Teacher (CBMT) model. CBMT addresses the instability issue by proposing a weak-strong augmented mean teacher learning scheme where only the teacher model generates pseudo labels from weakly augmented images to train a student model that takes strongly augmented images as input. The teacher is updated as the moving average of the instantly trained student, which could be noisy. This prevents the teacher model from being abruptly impacted by incorrect pseudo-labels. For the class imbalance issue, CBMT proposes a novel loss calibration approach to highlight foreground classes according to global statistics. Experiments show that CBMT well addresses these two issues and outperforms existing methods on multiple benchmarks. § INTRODUCTION Medical image segmentation plays an essential role in computer-aided diagnosis systems in different applications and has been tremendously advanced in the past few years <cit.>. While segmentation models <cit.> always require sufficient labeled data, unsupervised domain adaptation (UDA) approaches have been proposed that learn an adaptive model jointly with unlabeled target domain images and labeled source domain images <cit.>, for example, the adversarial training paradigm <cit.>. Although impressive performance has been achieved, these UDA methods may be limited for some real-world medical image segmentation tasks where labeled source images are not available for adaptation. This is not a rare scenario because medical images are usually highly sensitive in terms of privacy and copyright protection, such that labeled source images may not be allowed to be distributed. This motivates the investigation of source-free domain adaptation (SFDA), which adapts a source segmentation model trained on labeled source data (in a privacy-protected way) to the target domain using only unlabeled data. A few recent SFDA works have been proposed. OSUDA <cit.> utilizes the domain-specific low-order batch statistics and domain-shareable high-order batch statistics, trying to adapt the former and keep the consistency of the latter.
SRDA <cit.> minimizes a label-free entropy loss guided with a domain-invariant class-ratio prior. DPL <cit.> introduces pixel-level and class-level pseudo-label denoising schemes to reduce noisy pseudo-labels and select reliable ones. U-D4R <cit.> applies an adaptive class-dependent threshold with the uncertainty-rectified correction to realize better denoising. Although these methods have achieved some success in model adaptation, they still suffer from two major issues. First, they tend to be fairly unstable. Without any supervision signal from labeled data, the model heavily relies on the predictions generated by itself, which are always noisy and could easily make the training process unstable, causing catastrophic error accumulation after several training epochs as shown in Fig. <ref>(a). Some works avoid this problem by only training the model for very limited iterations (only 2 epochs in <cit.>) and selecting the best-performing model during the whole training process for testing. However, this does not fully utilize the data and it is non-trivial to select the best-performing model for this unsupervised learning task. Second, they failed to consider the severe foreground and background imbalance of fundus images where the foreground (e.g., cup) region is usually very small (as shown in Fig. <ref>(b)). This oversight could also lead to a model degradation due to the dominate background learning signal. In this paper, we propose the Class-Balanced Mean Teacher (CBMT) method to address the limitations of existing methods. To mitigate the negative impacts of incorrect pseudo labels, we propose a weak-strong augmented mean teacher learning scheme which involves a teacher model and a student model that are both initialized from the source model. We use the teacher to generate pseudo label from a weakly augmented image, and train the student that takes strongly augmented version of the same image as input. We do not train the teacher model directly by back-propagation but update its weights as the moving average of the student model. This prevents the teacher model from being abruptly impacted by incorrect pseudo labels and meanwhile accumulates new knowledge learned by the student model. To address the imbalance between foreground and background, we propose to calibrate the segmentation loss and highlight the foreground class, based on the prediction statistics derived from the global information. We maintain a prediction bank to capture global information, which is considered more reliable than that inside one image. Our contributions can be summarized as follows: (1) We propose the weak-strong augmented mean teacher learning scheme to address the stable issue of existing methods. (2) We propose the novel global knowledge-guided loss calibration technique to address the foreground and background imbalance problem. (3) Our proposed CBMT reaches state-of-the-art performance on two popular benchmarks for adaptive fundus image segmentation. § METHOD Source-Free Domain Adaptive (SFDA) fundus image segmentation aims to adapt a source model h, trained with N_S labeled source images 𝒮={(X_i, Y_i)}_i=1^N_S, to the target domain using only N_T unlabeled target images 𝒯={X_i}_i=1^N_T. Y_i∈{0, 1}^H× W × C is the ground truth, and H, W, and C denote the image height, width, and class number, respectively. 
A vanilla pseudo-labeling-based method generates pseudo labels ŷ∈ℝ^C from the sigmoided model prediction p=h(x) for each pixel x∈ X_i with the source model h: ŷ_k=1[p_k > γ], where 1 is the indicator function and γ∈[0, 1] is the probability threshold for converting a soft probability into a hard label. p_k and ŷ_k are the k-th dimensions of p and ŷ, respectively, denoting the prediction and pseudo label for class k. Then (x,ŷ) is utilized to train the source model h with the binary cross entropy loss: L_bce= -𝔼_x∼ X_i [ŷlog(p)+(1-ŷ)log(1-p)] Most existing SFDA works refine this vanilla method by proposing techniques to calibrate p and get better pseudo labels ŷ, or to measure the uncertainty of p and apply a weight when using ŷ for computing the loss <cit.>. While achieving improved performance, these methods still suffer from the instability issue because a noisy ŷ will directly impact h, and the error will accumulate since the predictions of h are then used for pseudo labeling. Another problem is that these methods neglect the imbalance of the foreground and background pixels in fundus images, where the foreground region is small. Consequently, the second term in Eq. (<ref>) will dominate the loss, which is undesirable. Our proposed CBMT model addresses the two problems by proposing the weak-strong augmented mean teacher learning scheme and the global knowledge-guided loss calibration technique. Fig. <ref>(c) shows the framework of CBMT. §.§ Weak-Strong Augmented Mean Teacher To avoid error accumulation and achieve a robust training process, we introduce the weak-strong augmented mean teacher learning scheme, in which there are a teacher model h_t and a student model h_s, both initialized from the source model h. We generate pseudo labels with h_t and use the pseudo labels to train h_s. To enhance generalization performance, we further introduce a weak-strong augmentation mechanism that feeds weakly and strongly augmented images to the teacher model and the student model, respectively. Concretely, for each image X_i, we generate a weakly-augmented version X^w_i by using image flipping and resizing. Meanwhile, we generate a strongly-augmented version X^s_i. The strong augmentations we used include a random eraser, contrast adjustment, and impulse noise. For each pixel x^w∈ X^w_i, we generate the pseudo label ŷ^w=h_t(x^w) with the teacher model h_t and Eq. (<ref>). Then, we train the student model h_s with ℒ = 𝔼_x^s∼ X^s_i, ŷ^w [ℒ̃_bce], where ℒ̃_bce is the refined binary cross entropy loss that we will introduce later. It is based on Eq. (<ref>) but addresses the foreground-background imbalance problem. The weak-strong augmentation mechanism has two main benefits. First, since fundus image datasets are always on a small scale, the model could easily get overfitted due to insufficient training data. To alleviate this, we enhance the diversity of the training set by introducing image augmentation techniques. Second, learning with different random augmentations acts as a consistency regularizer, constraining images with similar semantics to the same class, which forms a more distinguishable feature representation. We update the student model by back-propagating the loss defined in Eq. (<ref>). For the teacher model, however, we update it as the exponential moving average (EMA) of the student model: θ̃←λθ̃+(1-λ)θ, where θ̃ and θ are the teacher and student model weights, respectively.
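A minimal PyTorch-style Python sketch of one such adaptation step is given below. The tiny network, the particular augmentations, and all names and hyper-parameter values are simplifying assumptions made for illustration; they are not the authors' released implementation.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model(num_classes=2):            # stand-in for the segmentation backbone
    return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, num_classes, 1))

student = make_model()
teacher = copy.deepcopy(student)          # both initialized from the source model
for p in teacher.parameters():
    p.requires_grad_(False)
optimizer = torch.optim.Adam(student.parameters(), lr=5e-4)
gamma, lam = 0.75, 0.98                   # pseudo-label threshold and EMA rate

def weak_aug(x):                          # e.g., horizontal flip
    return torch.flip(x, dims=[-1])

def strong_aug(x):                        # e.g., contrast jitter + noise
    return (x * 1.2 + 0.05 * torch.randn_like(x)).clamp(0, 1)

x = torch.rand(2, 3, 128, 128)            # a batch of unlabeled target images

# 1) Teacher produces hard pseudo labels from the weak view.
with torch.no_grad():
    p_teacher = torch.sigmoid(teacher(weak_aug(x)))
    pseudo = (p_teacher > gamma).float()
# Undo the flip so the pseudo labels align spatially with the strong view of x.
pseudo = torch.flip(pseudo, dims=[-1])

# 2) Student is trained on the strong view against the pseudo labels.
logits = student(strong_aug(x))
loss = F.binary_cross_entropy_with_logits(logits, pseudo)
optimizer.zero_grad(); loss.backward(); optimizer.step()

# 3) Teacher weights follow the student via EMA.
with torch.no_grad():
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(lam).add_((1 - lam) * s_p)

Keeping the teacher frozen with respect to gradients and moving it only through the EMA step is what decouples pseudo-label generation from the noisy instantaneous student updates.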
Instead of updating the model directly with gradients, we define the teacher model as the exponential moving average of the student, which makes the teacher model more consistent along the adaptation process. With this, we can train the model for a relatively long schedule and safely choose the final model without accuracy validation. From another perspective, the teacher model can be interpreted as a temporal ensemble of students at different time steps <cit.>, which enhances the robustness of the teacher model. §.§ Global Knowledge Guided Loss Calibration For a fundus image, the foreground object (e.g., cup) is usually quite small and most pixels belong to the background. If we update the student model with Eq. (<ref>), the background class will dominate the loss, which dilutes the supervision signals for the foreground class. The proposed global knowledge guided loss calibration technique aims to address this problem. A naive way to address the foreground-background imbalance is to count the pixels falling into the two categories within each individual image and devise a loss weighting function based on these counts. This strategy may work well for standard supervised learning tasks, where the labels are reliable. But with pseudo labels, it is too risky to conduct the statistical analysis based on a single image. To remedy this, we analyze the class imbalance across the whole dataset, and use this global knowledge to calibrate our loss for each individual image. Specifically, we store the predictions of pixels from all images and maintain the mean loss for foreground and background as η_k^fg = ∑_iℒ_i,k·1[ŷ_i,k=1]/∑_i1[ŷ_i,k=1]; η_k^bg = ∑_iℒ_i,k·1[ŷ_i,k=0]/∑_i1[ŷ_i,k=0], where ℒ is the segmentation loss mentioned above, and “fg” and “bg” represent foreground and background. The reason we use the mean of the loss, rather than the number of pixels, is that the loss of each pixel indicates the “hardness” of that pixel according to the pseudo ground truth. This gives more weight to the more informative pixels, so that more global knowledge is considered. With each average loss, the corresponding learning scheme can be further calibrated. We utilize the ratio of η_k^fg to η_k^bg to weight the background loss ℒ_k^bg: ℒ̃_bce= -𝔼_x∼ X_i, k∼ C [ŷ_klog(p_k)+η_k^fg/η_k^bg(1-ŷ_k)log(1-p_k)] The calibrated loss ensures fair learning among different classes, therefore alleviating model degradation issues caused by class imbalance (a code-level sketch of this weighting is given below). Since most predictions are usually highly confident (very close to 0 or 1), they are less informative. We only include pixels with relatively large loss scales to compute the mean loss. We realize this by adopting a constraint threshold α to select pixels: |f(x_i)-γ|/|ŷ_i-γ|>α, where α is set to 0.2 by default. α represents the lower-bound threshold on the normalized prediction, which filters out well-segmented, uninformative pixels. § EXPERIMENTS Implementation details. The code can be found at <https://github.com/lloongx/SFDA-CBMT>. We apply Deeplabv3+ <cit.> with a MobileNetV2 <cit.> backbone as our segmentation model, following previous works <cit.> for a fair comparison. For model optimization, we use the Adam optimizer with momentum coefficients of 0.9 and 0.99. During the source model training stage, the initial learning rate is set to 1e-3 and decayed by 0.98 every epoch, and the training lasts 200 epochs.
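As a code-level aside before the remaining implementation details, the loss calibration described in the previous section can be sketched in Python as follows. Tensor names, shapes, and the toy usage are our own assumptions rather than the authors' released code, and the α-based selection of informative pixels is omitted for brevity; the sketch only illustrates how the mean foreground/background losses are accumulated over a sweep and how their ratio re-weights the background term.

import torch
import torch.nn.functional as F

def update_eta(probs_list, pseudo_list):
    # Mean per-pixel BCE loss over the sweep, separately for foreground
    # (pseudo label 1) and background (pseudo label 0) pixels.
    fg, bg = [], []
    for prob, pseudo in zip(probs_list, pseudo_list):
        pix_loss = F.binary_cross_entropy(prob, pseudo, reduction="none")
        fg.append(pix_loss[pseudo == 1])
        bg.append(pix_loss[pseudo == 0])
    return torch.cat(fg).mean(), torch.cat(bg).mean()

def calibrated_bce(logits, pseudo, eta_fg, eta_bg):
    # BCE whose background term is scaled by the ratio eta_fg / eta_bg.
    prob = torch.sigmoid(logits)
    loss = -(pseudo * torch.log(prob + 1e-6)
             + (eta_fg / eta_bg) * (1 - pseudo) * torch.log(1 - prob + 1e-6))
    return loss.mean()

# Toy usage for a single (cup-like) class with random teacher outputs.
gamma = 0.75                                   # pseudo-label threshold
probs = [torch.rand(128, 128) for _ in range(4)]
pseudos = [(p > gamma).float() for p in probs]
eta_fg, eta_bg = update_eta(probs, pseudos)    # global statistics over the sweep
loss = calibrated_bce(torch.randn(128, 128), pseudos[0], eta_fg, eta_bg)
print(float(eta_fg), float(eta_bg), float(loss))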
At the source-free domain adaptation stage, the teacher and student models are first initialized from the source model, and the EMA update scheme is applied between them for a total of 20 epochs with a learning rate of 5e-4. The loss calibration parameter η is computed every epoch and applied to the cup class. The output probability threshold γ is set to 0.75 following a previous study <cit.>, and the model EMA update rate λ is 0.98 by default. We implement our method with PyTorch on one NVIDIA 3090 GPU and set the batch size to 8 during adaptation. Datasets and metrics. We evaluate our method on widely-used fundus optic disc and cup segmentation datasets from different clinical centers. Following previous works, we choose the REFUGE challenge training set <cit.> as the source domain and adapt the model to two target domains, the RIM-ONE-r3 <cit.> and Drishti-GS <cit.> datasets, for evaluation. Quantitatively, the source domain consists of 320/80 fundus images for training/testing with pixel-wise optic disc and cup segmentation annotations, while the target domains have 99/60 and 50/51 images, respectively. As in <cit.>, the fundus images are cropped to 512×512 ROI regions. We compare our CBMT model with several state-of-the-art domain adaptation methods, including the UDA methods BEAL <cit.> and AdvEnt <cit.> and the SFDA methods SRDA <cit.>, DAE <cit.>, and DPL <cit.>. More comparisons with U-D4R <cit.> under other adaptation settings can be found in the supplementary materials. General segmentation metrics are used for performance evaluation, including the Dice coefficient and the Average Symmetric Surface Distance (ASSD). The Dice coefficient (higher is better) measures pixel-level overlap, and ASSD (lower is better) indicates the accuracy of the prediction boundary. §.§ Experimental Results The quantitative evaluation results are shown in Tab. <ref>. We include the without-adaptation results from <cit.> as a lower bound and the supervised learning results from <cit.> as an upper bound, following <cit.>. As shown in the table, our method outperforms previous state-of-the-art SFDA methods on both metrics and even improves over traditional UDA methods on some metrics. Especially on the RIM-ONE-r3 dataset, CBMT achieves a large performance gain over previous works (Dice improves by 3.23 on the disc), because the domain shift is more severe there and leaves greater room for improvement. Moreover, CBMT alleviates the need for precise tuning of hyper-parameters. We can set a relatively long training procedure (our epoch number is 10 times that of <cit.>) and safely select the last checkpoint as the final result without worrying about model degradation, which is crucial for real-world clinical source-free domain adaptation. §.§ Further Analyses Ablation study. To assess the contribution of each component to the final performance, we conduct an ablation study on the main modules of CBMT, as summarized in Table <ref>. Note that we reduced the learning rate by a factor of 20 for the vanilla pseudo-labeling experiments to obtain comparable performance, because models are prone to degradation without EMA updating. As observed in the quantitative results, the EMA update strategy prevents the model degradation that the vanilla pseudo-labeling paradigm suffers from. Image augmentation and loss calibration also boost accuracy, and the highest performance is achieved with both.
The loss calibration module contributes a larger improvement through its handling of class imbalance, while image augmentation is easy to implement and plug-and-play in various settings. Hyper-parameter sensitivity analysis. We further investigate the impact of different hyper-parameters. Fig. <ref>(a) presents the accuracy under different EMA update rates λ. It shows that both too low and too high update rates cause a drop in performance, which is quite intuitive: a higher λ leads to inconsistency between the teacher and student, so the teacher can hardly learn from the student; on the other hand, a lower λ always keeps the teacher and student close, degenerating to vanilla pseudo-labeling. Within a reasonable range, however, the model is not sensitive to the update rate λ. To evaluate the variation of the loss calibration weight η_k^fg/η_k^bg with different constraint thresholds α, we present the results in Tab. <ref>. As discussed in Sec. <ref>, most pixels in an image are well-classified, and if we simply compute the means over all pixels (i.e., α=0), as shown in the first column, the mean loss of the background is severely underestimated due to the large number of zero-loss pixels. Moreover, as α changes, the calibration weight varies little, indicating the robustness of our calibration technique to the threshold α. The effectiveness of loss calibration in balancing classes. The class imbalance problem can cause misalignment in the learning processes of different classes, leading to a gradual decrease of the predicted foreground area. This can ultimately result in model degradation. As shown in Fig. <ref>(b), neglecting class imbalance causes a significant drop in the number of pixels predicted as "cup" during training, and finally leads to a performance drop. Loss calibration analyzes this issue and provides an effective technique to alleviate it by balancing the loss with global context. § CONCLUSION In this work, we propose a class-balanced mean teacher framework to realize robust SFDA learning for more realistic clinical applications. Based on the observation that the model suffers from degradation during adaptation training, we introduce a mean teacher strategy that updates the model via an exponential moving average, which alleviates error accumulation. Meanwhile, by investigating the foreground-background imbalance problem, we present a global knowledge guided loss calibration module. Experiments on two fundus image segmentation datasets show that CBMT outperforms previous SFDA methods. §.§.§ Acknowledgement. This work was partly supported by Shenzhen Key Laboratory of next generation interactive media innovative technology (No: ZDSYS20210623092001004).
http://arxiv.org/abs/2307.05808v1
20230711211344
Sampling Faraday rotation sky of IllustrisTNG50: I. Imprint of the magnetised circumgalactic medium around Milky Way-like galaxies
[ "Seoyoung Lyla Jung", "N. M. McClure-Griffiths", "Ruediger Pakmor", "Yik Ki Ma", "Alex S. Hill", "Cameron L. Van Eck", "Craig S. Anderson" ]
astro-ph.GA
[ "astro-ph.GA" ]
Faraday rotation measure (RM) is arguably the most practical observational tracer of magnetic fields in the diffuse circumgalactic medium (CGM). We sample synthetic Faraday rotation skies of Milky Way-like galaxies in IllustrisTNG50 by placing an observer inside the galaxies at a solar circle-like position. Our synthetic RM grids emulate specifications of current and upcoming surveys; the NRAO VLA Sky Survey (NVSS), the Polarisation Sky Survey of the Universe's Magnetism (POSSUM), and a future Square Kilometre Array (SKA1-mid) polarisation survey. It has been suggested that magnetic fields regulate the survival of high-velocity clouds. However, there is only a small number of observational detections of magnetised clouds thus far. In the first part of the paper, we test conditions for the detection of magnetised circumgalactic clouds. Based on the synthetic RM samplings of clouds in the simulations, we predict upcoming polarimetric surveys will open new opportunities for the detection of even low-mass and distant clouds. In the second part of the paper, we investigate the imprint of the CGM in the all-sky RM distribution. We test whether the RM variation produced by the CGM is correlated with global galaxy properties, such as distance to a satellite, specific star formation rate, neutral hydrogen covering fraction, and accretion rate to the supermassive black hole. We argue that the observed fluctuation in the RM measurements, which has been considered an indication of intergalactic magnetic fields, might in fact incorporate a significant contribution of the Milky Way CGM. polarization – magnetic fields – (magnetohydrodynamics) MHD – Galaxy: halo – galaxies: magnetic fields – methods: numerical § INTRODUCTION Galaxies are surrounded by the circumgalactic medium (CGM) that evolves interactively with the interstellar medium (ISM) and the intergalactic medium (IGM) beyond the galactic halo. While most of the volume within the halo is filled with hot diffuse gas, observations report the detection of denser and cooler phase gas clouds around the Milky Way as well as nearby galaxies (e.g., ; ). Observationally, such circumgalactic gas clouds around the Milky Way are referred to as high- or intermediate-velocity clouds (HVCs or IVCs; see and for a review). HVCs and IVCs are associated with diverse thermodynamical mechanisms in the CGM, for example, cold accretion of cosmic filaments, cooling of energetic outflows from the Galactic disk, and ram pressure/tidal stripping of cold gas in satellite galaxies (; ; ; ; ; ; ). The magnetic field strength in the galactic halo is usually very weak (∼0.1μ G, ; ; ), but fast-moving clouds can sweep up and stretch the ambient halo magnetic field lines and amplify the field strength around them.
Magneto-hydrodynamic (MHD) numerical simulations suggest that magnetic fields draped and amplified around a cloud regulate the mixing of gas at the cloud-halo interface, thus affecting the amount of radiative cooling in the cloud system, and eventually helping the survival of the cloud throughout the passage within the halo (; ; ; ; ; ; ; ; ; ; ; ; ; ; ). Faraday rotation of background polarisation sources (e.g., quasars and pulsars) has been a major observational tracer for searching for magnetised circumgalactic clouds around the Milky Way. The rotation measure (RM) is a measure of the change in the polarisation angle due to Faraday rotation as linearly polarised radiation propagates within a magneto-ionic medium: RM = 0.812∫_ observer^ sourcen_ e(r)B_∥(r)dr, where RM is in units of rad m^-2, n_ e is the electron density in cm^-3, B_∥ is the magnetic field strength along the line-of-sight in μ G, and r is a path length in pc. There have been reports of three high-velocity HI complexes around the Milky Way that spatially overlap with the excessive RM in the Faraday rotation sky: the Magellanic Leading Arm (); the Smith cloud (; ), and the Magellanic bridge (). Although, the RM excess in the Magellanic Leading Arm region has recently been demonstrated to be a contribution from an overlapping object (). There are more detections when extending the search to IVCs or ionized clouds (e.g., ). Still, little is known about the magnetic field properties of the vast majority of the hundreds of circumgalactic clouds. One goal of this study is to examine the detection statistics of magnetised HVCs around simulated Milky Way-like galaxies. To do so, we perform synthetic Faraday rotation measure observations of the high-resolution cosmological suite of simulations IllustrisTNG50 (; ). We sample the RM distribution at different source densities and RM precision. By doing so, we investigate conditions for the detection of magnetised circumgalactic clouds and evaluate the extent to which current and future radio polarimetric surveys are capable of detecting these clouds given their observational specifications. The second main goal of our study is to examine the imprint of the CGM as a whole (not limiting ourselves to HI clouds) onto the Faraday rotation sky of Milky Way-like galaxies in IllustrisTNG50. Quantifying the contribution of the CGM to the all-sky RM distribution is an important piece of information that can assist with the detection of intergalactic magnetic fields, for example, in cosmic large-scale structures. This is because the Faraday rotation at the Milky Way CGM inevitably adds to the net Faraday rotation of any extragalactic polarised radiation coming towards the observer. There have been suggestions from theoretical studies for methods to separate galactic and extragalactic Faraday rotation, e.g., the preferred angular scale of the imprint of Faraday rotation at the cosmic large-scale structure (). However, the range of angular scales probed by observed all-sky RM distribution has not been sufficiently small, limited by the polarised source density of RM catalogues. For that reason, probing the nature of intergalactic magnetic fields has been held off as an area of interest awaiting future polarimetric surveys and instruments (). We make clear that the motivation of this paper is not in testing the power of the cosmological simulations in reproducing observed properties of the CGM. 
There are a number of earlier studies demonstrating that the CGM properties of Milky Way-like galaxies in current state-of-the-art cosmological simulations are generally comparable to observations (e.g., ). We rather refer interested readers to related publications exploring the physical nature of cold circumgalactic clouds in IllustrisTNG50, for example, <cit.>, <cit.>. In this paper, we focus on utilizing the existing MHD cosmological simulations to assess the potential observability of CGM signatures under the assumption that IllustrisTNG magnetised clouds are similar to those observed in the Milky Way. This paper is structured as follows. We provide an overview of the IllustrisTNG50 simulation in Section <ref> and define the Milky-Way-like galaxy sample in Section <ref>. We describe how we define the CGM in simulations and identify individual circumgalactic clouds in Sections <ref> and <ref>. In Section <ref>, we evaluate the capabilities of current and future radio polarimetric surveys in detecting magnetised HI circumgalactic clouds based on our synthetic Faraday rotation samplings. In Section <ref>, we quantify the contribution of the entire CGM to the all-sky RM distribution and compare the results of simulated galaxies to the Milky Way observations. Section <ref> is the summary and conclusion. We discuss the effect of the simulation resolution in Appendix <ref>. § METHOD §.§ A brief overview of the IllustrisTNG The IllustrisTNG project (Illustris The Next Generation, ; ; ; ; ) is a suite of MHD cosmological simulations using the AREPO (; ). The simulations assume a <cit.> cosmology (Ω_Λ,0= 0.6911, Ω_ m, 0= 0.3089, Ω_ b, 0=0.0486, σ_ 8= 0.8159, n_ s = 0.9667 , and h = 0.6774). In this study, we focus on IllustrisTNG50 which achieves the best spatial and mass resolution among the IllustrisTNG suite. Its relatively small simulation volume (∼ 50^3 Mpc^3 co-moving box) compared to TNG100 (∼ 100^3 Mpc^3) and TNG300 (∼ 300^3 Mpc^3) is not a major hurdle for our study. We have ensured a sufficient number of samples, i.e., Milky Way-like galaxies and circumgalactic clouds, as will be presented in the following sections. Detailed descriptions of the simulations appear in many earlier publications referenced in this paper. Specifically, we refer interested readers to the IllustrisTNG50 introduction papers for details about the simulations (; ). A comprehensive description of sub-grid model treatments (e.g., star formation, chemical evolution, radiative cooling, stellar/black hole feedback) and numerical methods for the simulations are provided in the TNG methods papers (; ). Here, we provide a brief introduction to the IllustrisTNG50 suite. Table <ref> summarises the two simulations we utilize in this paper. In short, TNG50-1 is a fiducial simulation and TNG50-2 is its lower-resolution counterpart that we use for a resolution test in Appendix <ref>. Other than the resolution settings (the initial number of dark matter particles N_ DM and gas cells, N_ gas; the mean dark matter mass resolution, m_ DM, and that of baryon, m_ gas; the softening length of collisionless particles, ϵ_ DM, ⋆; and the minimum gravitational force softening length for gas cells, ϵ_ gas, min), all other input parameters are identical between the two simulations. The nature of the moving-mesh code AREPO refines denser structures with a larger number of smaller cells. Therefore, cold circumgalactic clouds are well resolved with ∼ 100-200 pc resolution in TNG50-1 according to investigations by <cit.>. 
Magnetic fields evolve self-consistently in IllustrisTNG. The simulations initially start from a homogeneous seed magnetic field with the field strength of 10^-14 G (comoving unit) at z=127. AREPO advances the magnetic fields by numerically solving the ideal MHD equations (see ). The divergence of magnetic fields (∇·B) is controlled using a divergence-cleaning algorithm by <cit.>. Earlier studies have shown that the magnetic field properties at low redshifts are insensitive to the seed field (). §.§ Milky Way-like sample selection Individual halos in the simulations are identified from the distribution of dark matter particles using the friends-of-friends (FoF) algorithm. Gravitationally bound substructures within the FoF halos, i.e., galaxies and/or subhalos, are identified using the Subfind algorithm (). This study focuses on the Milky Way analogy that we define based on the halo mass (M_ 200) and the star formation rate: 5×10^11<M_ 200/M_⊙<3×10^12 and 0.5<SFR/( M_⊙yr^-1)<1.5. The star formation rate is measured within 30 kpc from the centre of the halo and based on the total mass of stars formed in the last 250 Myr. In addition to the halo mass and the star formation rate, we adopt an additional criterion based on the kinematic disk-to-total ratio f_ rot>0.7, where f_ rot is the mass fraction of the rotating gas component to the total gas in a galaxy. The exact definition of the rotating gas and the characteristic size of galaxies will be presented shortly. Finally, we exclude merging galaxies that have substructures that are (i) within 30 kpc from the centre of the halo and (ii) more massive than 10% of the stellar mass of the central galaxy. There are 56 halos in the TNG50-1 simulation volume at z=0 that match all the above criteria. §.§ Separation between the galactic ISM and the CGM In this work, we utilize any gas within 300 kpc from the centre of a halo. This boundary is slightly larger than the virial radius of halos in our sample (169<R_ 200/ kpc<298). In order to focus on the CGM of the simulated galaxies, we separate the galactic disk of the host galaxies and their CGM. We do not distinguish other substructures within the halo such as satellite galaxies and their own CGM. Instead, we consider them as the collective CGM of the host galaxy. We use both the spatial and kinematic distributions of gas cells in the simulations to separate the ISM and the CGM. First, we calculate the orbital circularity parameter of each gas element defined as follows (): ϵ_ J=J_ z/J_ circ(E), where J_ z is the specific angular momentum of a gas element along the net spin-axis of a galaxy and J_ circ(E) is the specific angular momentum of the gas if it was orbiting in a circular orbit when the orbital energy (E) is fixed. The rotational axis of a galaxy is identified by calculating the net angular momentum vector of young stars (age <1 Gyr) within 0.01 R_ 200. Gas cells that follow the net rotation of a galaxy by definition have ϵ_ J close to 1. In this work, we define gas cells with ϵ_ J>0.7 as the rotating component. Along with the filter based on the orbital circularity parameter, we impose spatial filtering to take care of gas elements in the CGM that happen to have their rotational axis aligned with the bulk rotation of the galactic ISM. We determine the radius (r_ disk) where the mean density of the rotating gas (ϵ_ J>0.7) drops below 1% of the mean density within the central 10 kpc of the galaxy. 
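As an illustration of the disc-CGM separation just described, the sketch below applies the circularity and radius cuts to per-cell arrays. The input arrays (specific angular momenta, galactocentric radii) and the equal-mass-cell density estimate are simplifying assumptions of ours, not the TNG analysis pipeline itself.

```python
import numpy as np

def disk_radius(r_kpc, eps_J, eps_cut=0.7, r_max=300.0, dr=1.0, frac=0.01):
    """Radius at which the mean density of rotating gas (eps_J > eps_cut) drops below
    `frac` of its mean within the central 10 kpc. Cells are treated as equal mass, so
    number density per spherical shell stands in for mass density."""
    rot = eps_J > eps_cut
    edges = np.arange(0.0, r_max + dr, dr)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    dens = np.histogram(r_kpc[rot], bins=edges)[0] / shell_vol
    ref = np.sum(rot & (r_kpc < 10.0)) / (4.0 / 3.0 * np.pi * 10.0 ** 3)
    below = np.where((dens < frac * ref) & (edges[1:] > 10.0))[0]
    return edges[below[0] + 1] if below.size else r_max

def ism_mask(j_z, j_circ, r_kpc, eps_cut=0.7):
    """Rotating galactic ISM: circularity eps_J = J_z / J_circ(E) > eps_cut
    and galactocentric distance below r_disk."""
    eps_J = j_z / j_circ
    return (eps_J > eps_cut) & (r_kpc < disk_radius(r_kpc, eps_J, eps_cut))
```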
In brief summary, we define the rotating galactic ISM as gas cells that have (i) the orbital circularity ϵ_ J>0.7 and (ii) the distance from the galactic centre smaller than r_ disk. We leave out gas cells that meet these criteria in our analysis of the CGM. Note that our sample of circumgalactic clouds includes the ISM of satellite galaxies as well as clouds within the halos of satellite galaxies, some of which are possibly brought into the host halo system with the infall of satellites. §.§ All-sky projection and cloud identification To perform synthetic observations of the simulated galaxies, we place a mock observer at a location on the galactic mid-plane at random azimuth 8 kpc away from the galactic centre (i.e., the solar radius). Then we transform the 3D coordinates of gas cells in the simulation domain from the Cartesian coordinate (x, y, z) to the galactic coordinate of the mock observer (l, b, d), where d is the distance from the observer and galactic coordinates are defined the same as the conventional Milky Way coordinates: the galactic longitude (l) varies between 0 and 360 ^∘ where the galactic centre is at l=0 ^∘ and the galactic latitude (b) varies between -90 and 90 ^∘ with the galactic disk mid-plane at b=0^∘. The reprojected (l, b, d) grid is a uniform grid as opposed to the Voronoi tessellating polyhedrons partitioning the simulation volume. The angular resolution of the grid is Δ l=Δ b = 20 arcmin and for the line of sight integrals we sample along each sightline with a spatial resolution of Δ d = 100 pc unless a denser sampling of the sky is explicitly necessary or specified. In such cases, we use Δ l=Δ b = 6 arcmin and Δ d = 50 pc. At a typical distance to the circumgalactic clouds identified in the simulation (50 kpc), the angular resolution of the fiducial grid corresponds to ∼ 300 pc physical size which is comparable to the simulation's spatial resolution for cold circumgalactic clouds. For computing the RM along a sightline, we follow the same method described in <cit.>. The thermal electron density (n_ e) of gas cells in the simulations is calculated differently in star-forming gas and non-star-forming gas as explained in their paper. In this paper, especially in Section <ref>, we frequently refer to individual HI circumgalactic cloud complexes and discuss their properties. For identifying these HI overdensities in the CGM, we use the friends-of-friends algorithm. For any cells in the (l, b, d) grid with the HI column density n_ HI>3×10^16 cm^-2 (≈ 10^-4 cm^-3 physical density given the 100 pc grid), we group them based on the FoF threshold separation of 3 cells (=1^∘ in the l- and b-space and 300 pc in the d-space). The top left panel of Fig. <ref> shows the 3D distribution of the identified HI circumgalactic clouds surrounding one of the sample galaxies (halo id: 82). Individual clouds are coloured by their HI mass in log-scale and the galactic disk at the centre of the sphere is shown in grey colour. The top right panel shows the all-sky distribution of the clouds in the Mollweide projection viewed by a mock observer inside the galaxy. The colour of each cloud is the same as the panel on the left and the boundary of the clouds shown in this panel is where the HI column density is higher than n_ HI>10^-18 cm^-2. Various physical properties of cold CGM clouds in IllustrisTNG50 simulations are explored by <cit.>. 
Although we do not adopt the same cloud identification criteria as their work, we expect the properties of our clouds to be overall similar to what is presented in their paper. § DETECTION STATISTICS FOR MAGNETISED HI CLOUDS §.§ Obstacles for the detection There are several obstacles that make observations of magnetised circumgalactic clouds using RM grids challenging. We start by introducing some of them. * Limited polarised source density of RM grids A necessary condition for the detection of magnetised clouds is to have a statistically meaningful number of polarised sources in the background of the cloud of interest. The source density of RM grids is decided based on the sensitivity of the polarimetric observation. Based on deep observations of faint extragalactic polarised sources, <cit.> estimate the number distribution of the polarised sources follows N(>p)∼ 45*(p/30 μ Jy)^-0.6 at 1.4 GHz, where p is the detection limit of the polarised intensity and N(>p) is the number of polarised sources above a certain intensity p per square degree. The RM source catalogue of the NRAO VLA Sky Survey (NVSS, ; ) has ≈ 1 source/deg^2. Thanks to its wide sky coverage (82%), NVSS has been a major contributor to the discovery of candidates for magnetised HVCs (; ). However, any cloud complexes of an angular size less than a few square degrees are left out of systematic searches using NVSS as the RM grid density is not sufficient to ensure enough polarised sources overlap with the clouds to draw a statistically firm conclusion. As observation sensitivity improves, upcoming polarimetric surveys are expected to provide greatly enhanced polarised source densities. * RM measurement error The precision of RM measurements derived using the RM synthesis technique (; ) is defined as |RM_ err| = δϕ/2 (S/N), where S/N is the polarised signal-to-noise ratio conventionally set to be S/N6 for reliable RM measurements () and δϕ is the resolution of the Faraday spectra, i.e., the full-width half maximum of the RM spread function, which can be estimated as δϕ≈ 2√(3)/ (λ_ max^2-λ_ min^2), where λ_ max and λ_ min are the upper and the lower limit of the wavelength coverage of the polarimetric observation. For signals from magnetised clouds to be confirmed with adequate statistical significance, the ensemble average of the RM produced by the clouds needs to be overall sufficiently larger than the error distribution. * Complex physical structure of clouds MHD models of fast-moving clouds in a weakly magnetised medium commonly show magnetic field lines draped around the clouds (e.g., ; ; ). <cit.> successfully demonstrate that the observed RM pattern across the Smith Cloud resembles what is expected when projecting a simple draped magnetic field configuration into a 2D plane. However, it should be noted that many MHD simulations mentioned above assume a spherically symmetric cloud with a uniform-density core, whereas observations clearly show clumpy structures in HI HVCs. There are suggestions that the complex density structure of a cloud leads to complex magnetic field configuration as magnetic field lines can drape individual overdensities of the cloud (). Such complex n_ e and B_∥ structures along lines-of-sights through a cloud can cancel out much of the RM excess produced along the sightlines. Clouds in cosmological simulations self-consistently form and evolve. 
Therefore, such internal cancellations of RM signals are inherently taken into account in our detectability estimates in this study as long as the resolution of the simulations allows. * Confusion from the Galactic foreground As indicated in equation <ref>, any electron overdensities or enhanced magnetic fields between a source and an observer contribute to the observed RM of a polarised source. As we are surrounded by the Milky Way ISM, any sightlines towards extragalactic polarised sources inevitably suffer from possibilities for confusion from the Galactic foreground. Our ability to identify magnetised circumgalactic clouds strongly depends on how well we subtract the Faraday rotation at foreground Faraday screens in the region of interest. Simple models of the Milky Way foreground RM structure, such as a 2D surface fit to sources off the cloud-of-interest (e.g., in the Smith Cloud region), are sometimes sufficient to take care of the Milky Way foreground. However, such a simple approximation does not hold when the foreground Faraday screens are more complex. One example of such a complex field is the Magellanic Leading Arm region, which happens to have a supernova remnant in the foreground with a strikingly similar angular size to the Magellanic Leading Arm (). In this case, it is impossible to draw definitive conclusions about the magnetic field properties of the Magellanic Leading Arm from RM measurements from extragalactic sources alone. The determination of the true Galactic Faraday rotating foreground is highly topical and often advances through investigations of diverse observational tracers across a range of wavelengths (e.g., ). While the large-scale coherent pattern (above the order of a few degrees) in all-sky RM set by the global ISM characteristics can be approximated using smoothed all-sky RM maps (e.g., ), fluctuations in the RM at smaller scales (; ) are harder to constrain due to the stochastic nature of the turbulent magnetised ISM. For simplicity, in this paper, we assume the Galactic foreground has been taken care of in a complete manner. * RM variations of various origins Similarly to points (ii) and (iv) above, the RM excess generated by magnetised HVCs needs to be large enough to stand out from other sources generating observed RM variations (e.g., medium directly associated with sources themselves and the IGM) in order to hold the statistical significance of detection. According to <cit.>, the observed standard deviation in RM independent of the Galactic latitude is ≈ 6.2 rad m^-2. Although this scatter has been often referred to as an extragalactic contribution, a part of the scatter inevitably comes from the Milky Way CGM, which is also independent of the Galactic latitude. We will continue the discussion on this topic in Section <ref>. §.§ Detection rates of clouds in the simulations In this section, we evaluate whether individual clouds identified in the simulations are detectable with the given precision and sensitivity of synthetic polarimetry observations, directly addressing points (i) and (ii) above. To do so, we construct synthetic RM grids around the clouds with varying source densities (i.e., the number density of sightlines sampled by a mock observer inside simulated galaxies). 
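For reference, the ingredients of such a synthetic RM grid follow directly from the relations quoted above: the RM integral along a sightline, the polarised source density expected for a given sensitivity, and the RM uncertainty set by the bandwidth and signal-to-noise. The short sketch below evaluates these expressions; the helper names and the example survey numbers are illustrative choices, not the exact pipeline of this paper.

```python
import numpy as np

C_LIGHT = 299_792_458.0  # speed of light [m/s]

def rm_sightline(n_e, b_par, dr_pc=100.0):
    """Discretized RM integral: 0.812 * sum(n_e * B_par * dr) [rad m^-2],
    with n_e in cm^-3, B_par in microgauss, path elements dr in pc."""
    return 0.812 * np.sum(n_e * b_par * dr_pc)

def source_density(p_limit_uJy):
    """N(>p) ~ 45 (p / 30 uJy)^-0.6 polarised sources per deg^2 at ~1.4 GHz."""
    return 45.0 * (p_limit_uJy / 30.0) ** -0.6

def rm_uncertainty(nu_min_hz, nu_max_hz, snr):
    """|RM_err| = delta_phi / (2 S/N), with the RM spread function width
    delta_phi ~ 2 sqrt(3) / (lambda_max^2 - lambda_min^2)."""
    lam_max_sq = (C_LIGHT / nu_min_hz) ** 2
    lam_min_sq = (C_LIGHT / nu_max_hz) ** 2
    delta_phi = 2.0 * np.sqrt(3.0) / (lam_max_sq - lam_min_sq)
    return delta_phi / (2.0 * snr)

# Example: a POSSUM-like band (800-1088 MHz, ~18 uJy/beam sensitivity)
n_src = source_density(6 * 18)                 # ~20 sources per deg^2 at the 6-sigma limit
rm_err = rm_uncertainty(800e6, 1088e6, snr=6)  # ~4.5 rad m^-2 at the detection threshold
```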
Since cold circumgalactic clouds self-consistently form and evolve in the cosmological simulations, our analysis inherently takes into account the possible complexity of the magnetic field and density structures around the clouds (i.e., point iii above) as far as the resolution of the simulations allows. Parameters for the synthetic samplings are chosen to emulate known specifications of current and upcoming polarimetric surveys; namely the NVSS, The Polarisation Sky Survey of the Universe's Magnetism (POSSUM) using the Australian Square Kilometre Array Pathfinder (ASKAP), and a future polarisation survey using the Square Kilometre Array (SKA1-mid). Below are brief descriptions of the surveys relevant to our synthetic sampling. * The NVSS RM catalogue () has on average one RM measurement per square degree. The black line in Fig. <ref> shows the distribution of 1σ error given by the catalogue multiplied by a factor of 1.22 (see for a reason for the scaling). The vertical dashed line of the same colour shows the mean value at 12.9 rad m^-2. * POSSUM is one of the major ongoing surveys of ASKAP (). The sensitivity (≈18 μ Jy beam^-1) and the bandwidth (800-1088 MHz; band 1) of the observations promise the average polarised source density of ≈ 25 deg^-2 (based on equation <ref>, see also ) or even higher[Note that <cit.> count sources observed at 1.4 GHz. At the slightly lower frequency range of POSSUM (0.9 GHz), radio sources are in general slightly brighter and therefore POSSUM is likely to achieve an even higher source density.]. The expected width of the RM spread function is 54 rad m^-2 (from equation <ref>). In Fig. <ref>, we show the distribution of expected |RM_ err| of POSSUM sources and its mean in blue lines. Descriptions of how we calculate the expected RM_ err distribution will be presented shortly in the following paragraphs. * As the Square Kilometre Array (SKA) is on the way, there are ongoing discussions on requirements and expectations for an optimal polarimetric survey in the SKA era. In this paper, we borrow the specifications of an RM grid survey present in <cit.> utilizing SKA1 mid-frequency band 2 (950-1760 MHz). Despite the slightly higher frequency range compared to POSSUM, the broader bandwidth will provide a narrower RM spread function, therefore, a slightly better RM accuracy. See the red line in Fig. <ref> for the expected distribution of |RM_ err|. The polarised source density of ≈ 60 deg^-2 is expected given the suggested sensitivity of 4 μ Jy beam^-1. The first two columns of Table <ref> summarize the choice of the sampling density and the mean value of the |RM_ err| distribution for each of our synthetic RM sampling (hereafter, Mock-NVSS, Mock-POSSUM, and Mock-SKA). We will explain the detection rate presented in columns 3, 4, and 5 shortly. The central panels of Fig. <ref> show the three synthetic RM grids sampled on one of the HI circumgalactic clouds in the simulations. For each cloud identified in the simulations, we calculate RM_ HVC along randomly selected sightlines by integrating n_ e B_∥ as per equation <ref> (upper central panels in Fig. <ref>). The number of RM_ HVC samples is decided based on the sampling densities. We clarify that RM_ HVC we refer to in this paper is the net Faraday rotation within a localized region enclosing a cloud. 
We effectively disentangle possible overlap between multiple clouds along a sightline by restricting the range of integration to [d_ min-Δ d, d_ max+Δ d], where d_ min and d_ max are the minimum and the maximum span of a cloud in the distance domain and Δ d = 100 pc has been added as an additional buffer to the integration range. Then, we incorporate the measurement errors of RM observations (RM_ err) to the pure RM_ HVC as shown in the lower panels of the box in the middle of Fig. <ref>. For Mock-NVSS samplings, we randomly draw RM_ err values from the observed error distribution of the NVSS catalogue and multiply by 1.22 (). For Mock-POSSUM and Mock-SKA, we construct the expected RM_ err distribution of each dataset based on the following procedures: first, we utilize the polarised source count distribution observed by <cit.> to obtain the polarised signal-to-noise (S/N) distribution of observable radio sources (S/N6). Then we plug in the S/N values to equation <ref> and get the |RM_ err| distribution for Mock-POSSUM and Mock-SKA separately. For simplicity, we assume that the polarised source count does not change significantly between the frequency range that <cit.> explored and what will be covered by POSSUM and SKA1-mid survey. We define the “detection” of a magnetised cloud when the distribution of RM_ HVC+RM_ err is statistically different from the RM_ err distribution. We use the two-sample Kolmogorov-Smirnov (KS) test to quantify the difference between the two distributions. The detection rate of a cloud is obtained by drawing different sets of RM_ err from the RM_ err distribution a large number of times (N=10^3). For each draw, we calculate the p-value of the KS test, i.e., the degree of the difference between the RM_ err distribution and the RM_ HVC+RM_ err distribution. The detection rate is defined as the probability that the p-value is less than 0.0027 which corresponds to the probability of the two distributions being different with larger than 3 σ confidence; Detection rate≡N(p-value<0.0027)/10^3. The histograms in the bottom box of Fig. <ref> show the distribution of p-values for each sampling experiment for the example cloud. The detectability of this cloud for example is 0% for the Mock-NVSS, 79.9% for the Mock-POSSUM, and 100% for the Mock-SKA sampling. In Table <ref>, we provide the total detection rates of all clouds identified in the simulations (column 3), clouds within the distance of 30 kpc (column 4), and clouds above the HI mass of 10^6 M_⊙, i.e., around the mass of the Smith Cloud or higher (column 5) for each sampling. Indeed, the detectability of magnetised clouds increases significantly from 4.0% (mock-NVSS) to 37.4% (mock-POSSUM) and 51.4% (mock-SKA) with the improved source density (from 1 deg^-2 to 25 deg^-2 and 60 deg^-2) and the characteristic RM precision (12.9 rad m^-2 to 2.1 rad m^-2 and 1.9 rad m^-2). Comparing columns 3, 4, and 5, we find that the detection rate strongly depends on the distance to the clouds as well as the cloud mass. Magnetised clouds closer to an observer and/or more massive have higher chances of detection at a given RM sampling specification. In Fig. <ref>, we present the detection rate of clouds as a function of the distance and the HI mass of the clouds. We take the distance between the observer and the clouds' centre of mass as a representative distance. The cloud mass presented here is the HI mass, therefore, they are always smaller than the total gas mass and depend on the HI fraction within a cloud. 
Therefore, low-mass clouds in this figure (HI mass ∼10^4 M_⊙) do not conflict with the gas mass resolution of the simulations (m_ gas=8.5×10^4 M_⊙, see Table <ref>). We discuss the resolution test in Appendix <ref>. The colour of the hexagon bins shows the average detection rate of clouds in each bin. Bins containing less than three clouds at the given parameter range are shown as open hexagons. Results from the Mock-NVSS sampling (left panel) show that the detection rate of magnetised clouds is strictly limited to massive and closeby clouds. The yellow star in this panel is where the Smith Cloud, the only observationally identified magnetised HVC candidate unrelated to the Magellanic System, is located on this grid for reference (HI mass∼ 10^6.5 M_⊙ and distance ∼12.4 kpc, ; ). The detectability estimated by the simulations at this parameter regime is fairly high (≈60%), indicating that the detection of magnetic fields associated with the Smith-Cloud-like population is not uncommon. Both Mock-POSSUM and Mock-SKA sampling results present significantly increased detection rates at all mass and distance ranges, suggesting the detections are feasible even for distant and low-mass clouds. We first focus on clouds with HI masses above >10^7 M_⊙. Although not many clouds are located at this mass range (most of the hexagon bins are unfilled, i.e., enclose fewer than three clouds), they are almost always detectable with mock-POSSUM and mock-SKA samplings. Clouds in this regime are mostly satellite galaxies or extended outer disk structures of the central galaxy. At the lower mass range (HI mass <10^7 M_⊙), we find a strong mass and distance dependency of the detection rate. This trend stems from both the observational and physical nature of the clouds. In the left panel of Fig. <ref>, we show the mean sky coverage of clouds in the same HI mass – distance plane. Simply reflecting the inverse-square law of the solid angle, the larger the distance to a cloud the smaller it appears in the sky projection. Large angular size is favourable for the detection of magnetised clouds as it means that a cloud is sampled with a large number of background sources at a fixed polarised source density. On the other hand, the right panel of Fig. <ref> shows the mean magnetic field strength of the clouds. The stronger magnetic field should increase the RM contribution of the cloud, making the ensemble average of on-cloud RMs higher. We find a clear trend that clouds closer to an observer have stronger magnetic fields compared to the ones at large distances. A similar result is reported by <cit.> where the authors show cold clouds in the inner halo are dominated by magnetic pressure over thermal pressure in comparison to the clouds in the outer halo. We speculate this trend is a combined result of (i) the strength of the ambient halo magnetic fields being stronger at the inner halo and (ii) the presence of clouds originating from the strongly magnetised galactic ISM environment in the disk-halo interface. In summary, we learn the following from our synthetic RM sampling experiment on circumgalactic HI clouds in IllustrisTNG50: * With specifications of currently existing polarimetry surveys (e.g., NVSS), it is not unexpected that the detection of magnetised clouds has been only a handful and limited to clouds that are nearby (the Smith Cloud, ) or associated with the Magellanic System (the Large Magellanic Cloud; , the Small Magellanic Cloud; ; , and the Magellanic Bridge; ). 
* Polarimetric surveys conducted with upcoming radio telescopes (e.g., ASKAP and SKA) will provide improved polarised source density and RM measurement accuracy that can significantly increase the overall detection rate of magnetised clouds. * At a given RM grid sampling specification, the detection rate is higher for clouds that are more massive and/or closer to the observer. This is not only because of their larger sky coverage but also because they have stronger magnetic fields. §.§ Higher-order statistical tracers of magnetised clouds Identifying the RM “excess” associated with cloud distribution in the sky has been a widely used method to search for magnetised clouds using RM grids. The excessive RM, i.e., a larger mean |RM|, is almost certainly a good tracer of overall enhancement of magnetic field strength in and around clouds (higher B_∥ in equation <ref>). However, considering the RM is an observable parameter integrated along a sightline, the presence of magnetised clouds may not always produce a larger mean RM in the region. For example, a complex 3D magnetic field geometry in a turbulent medium could cancel out locally enhanced RM when integrated along a sightline (see point iii in Section <ref> as well as Figure 1 of ). In such cases, the imprint of a magnetised cloud would rather show up in higher-order statistics that trace fluctuations of RM at scales smaller than the scales probed by the RM grid. In this section, we examine whether higher-order statistics of RM, namely standard deviation, skewness, and kurtosis, are useful tracers of magnetised clouds. In Fig. <ref>, we show histograms of each statistic calculated for each cloud in the simulations (top left: mean, top right: standard deviation, bottom left: skewness, bottom right: kurtosis). All parameters are measured within all-sky-projected rectangular regions that tightly enclose HI clouds. In all panels, the colour of the histograms shows the signs of the statistics (red: positive, blue: negative). Furthermore, we separate the cloud sample into the detection and the non-detection groups in order to identify parameters that demonstrate a clear discrepancy between the two groups and consider them useful measures for finding magnetised clouds in RM grids. The solid lines are for clouds with a non-zero detection rate according to the mock-SKA sampling and the dashed line histograms are for non-detections for comparison. Note that our definition of the detection depends on the type of sampling we use (mock-NVSS, mock-POSSUM, and mock-SKA). The change in the sampling type alters the number of clouds in the detection and the non-detection group, but we confirm that the distribution of each statistic does not depend strongly on the sampling type we use. We start from the top left panel of Fig. <ref>. The absolute mean RM of the detection group (solid line) is overall higher than the non-detection group (dashed line), demonstrating that the RM excess in general serves as a useful tracer of magnetised clouds. Note that the measurement noise (RM_ err) distribution we insert to the pure RM_ HVC distribution has the mean value of zero. Thus, when there is no systematic contribution of a cloud, the expected mean RM value is zero. We do not see differences between the distributions of positive (red) and negative (blue) mean RM values. This indicates that there is no preferred line-of-sight direction of mean magnetic fields in the clouds. The top right panel of Fig. <ref> shows the distribution of RM standard deviation. 
The standard deviation is by definition always positive. The distribution of the non-detection group (dashed line) peaks at ∼ 2.2 rad m^-2, which simply reflects the standard deviation of the RM_ err distribution that we insert to the RM_ HVC of the mock-SKA sampling. The detection group (solid line) peaks at a slightly higher standard deviation and there is a long tail extended towards higher values. In this panel, we only show the distribution between 2.2 and 3 rad m^-2, but it is worth noting that 20% of the clouds in the detection group have the RM standard deviation higher than 3 rad m^-2. We present the skewness distribution in the bottom left panel of Fig. <ref>. The skewness parameterizes the asymmetry of the distribution. Zero skewness means the distribution is symmetric about the median. A positively skewed distribution has a tail extended towards the higher value (positive RM in our case) and a negatively skewed distribution is extended towards the opposite side (negative RM). The distribution of RM around individual clouds is more skewed among the detection group (solid line) compared to the non-detection group (dashed line). We do not find differences in the distribution of absolute skewness between negatively (blue) and positively (red) skewed populations. It is worth mentioning that among the clouds in the simulations, 85% have the same signs of the mean and the skewness. Finally, the bottom right panel of Fig. <ref> shows the kurtosis distribution. The kurtosis describes how extended the tails of a distribution are compared to the normal distribution. We adopt Fisher’s definition of excess kurtosis: if positive, the distribution approaches zero at both ends more slowly than a Gaussian and if negative, the tails fall faster than a Gaussian. The majority of the clouds have negative kurtosis (blue) strongly peaked at the absolute value of ≈1, and the bias toward negative kurtosis is stronger for the non-detection group (dashed line, 99%) than for the detection group (solid line, 74%). In comparison, the distribution of positive kurtosis values (red) is widely spread over several orders of magnitudes. We confirm that most clouds with large positive kurtosis are massive clouds (HI mass 10^7 M_⊙). Overall, the difference in kurtosis between the detection and non-detection groups is subtle compared to other statistics inspected in this work. From our analysis in this section, we conclude that magnetised circumgalactic clouds leave imprints on the mean, standard deviation, and skewness of the RM distribution, but not so much in kurtosis. The imprints in multiple RM statistics can be used as strong evidence of magnetised clouds in the absence of a strong detection in mean RM. One caveat is the correction of the foreground Galactic ISM which can potentially make a significant contribution to the observed RM statistics. As mentioned earlier in Section <ref>, characterising the Galactic foreground is beyond the scope of this paper and we have assumed that the foreground has been perfectly removed. While the foreground attributes of the observed Milky Way circumgalactic clouds must be taken care of on case-by-case bases (e.g., ), constraints on the expected RM statistics of the foreground can be acquired from a better characterization of the turbulent properties of the ISM. This is a field of active ongoing investigations and has been defined as one of the main scientific goals of future radio polarimetric surveys (). 
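To make the procedures of the last two subsections concrete, the following minimal sketch implements the KS-test detection criterion and the per-cloud statistics discussed above. The array layout is an assumption of ours: `rm_hvc` holds the noiseless on-cloud RM values of one cloud and `rm_err_pool` a large sample drawn from the adopted measurement-error distribution.

```python
import numpy as np
from scipy.stats import ks_2samp, skew, kurtosis

def detection_rate(rm_hvc, rm_err_pool, n_draws=1000, p_crit=0.0027, seed=None):
    """Fraction of noise realisations for which RM_HVC + RM_err is distinguishable
    from the RM_err distribution at >3 sigma (two-sample KS test)."""
    rng = np.random.default_rng(seed)
    rm_hvc = np.asarray(rm_hvc, dtype=float)
    n_det = 0
    for _ in range(n_draws):
        err = rng.choice(rm_err_pool, size=len(rm_hvc), replace=True)
        p_value = ks_2samp(rm_hvc + err, rm_err_pool).pvalue
        n_det += p_value < p_crit
    return n_det / n_draws

def rm_statistics(rm_on_cloud):
    """Mean, standard deviation, skewness and Fisher excess kurtosis of the RM
    values sampled within the rectangle enclosing one cloud."""
    rm = np.asarray(rm_on_cloud, dtype=float)
    return {"mean": rm.mean(),
            "std": rm.std(ddof=1),
            "skewness": skew(rm),
            "kurtosis": kurtosis(rm, fisher=True)}  # 0 for a Gaussian
```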
§ CONTRIBUTION OF THE CGM TO THE ALL-SKY RM DISTRIBUTION In this section, we now broaden our focus to the contribution of the entire CGM to the all-sky RM distribution, not limited to regions enclosing HI clouds. In the following paragraphs, we provide further motivation to do so. The majority of polarised point-like radio sources providing RM measurements are extragalactic objects. Therefore, any observed RM measurements are a superposition of Faraday rotation taking place at any magneto-ionized structures between an observer and a source. For example: RM_ obs = RM_ int + RM_ ex-gal + RM_ MW, CGM + RM_ MW, ISM + RM_ err. Each term of the above equation portrays Faraday rotation (i) intrinsic to the media within the vicinity of the source itself (RM_ int), (ii) at extragalactic structures like intervening galaxies and the large-scale structure (RM_ ex-gal), (iii) at the Milky Way CGM (RM_ MW, CGM), (iv) at the Milky Way ISM (RM_ MW, ISM), and (v) added to the signal due to the instrumental noise (RM_ err). Similarly, the RM variance adds up assuming each RM component is statistically independent and follows the Gaussian distribution: σ_ obs^2 = σ_ int^2 + σ_ ex-gal^2 + σ_ MW, CGM^2 + σ_ MW, ISM^2 + σ_ err^2. Decomposing the contributions of individual components empirically from the observed RM distributions is challenging, but there have been attempts to do so. <cit.> fit a Galactic latitude-dependent model to the RM dispersion measurements using the NVSS RM catalogue and separate latitude-dependent and latitude-independent components of the observed RM spread. The estimated σ_ RM values are ≈7.6 rad m^-2 for the latitude-dependent component and ≈6.2 rad m^-2 for the latitude-independent component. The author refers to the former as a Galactic contribution and the latter as an extragalactic contribution to the RM dispersion with the caveat that the Milky Way can have latitude-independent components. <cit.> report the extragalactic RM dispersion of similar extent between 6.6-7.2 rad m^-2. Measuring the difference in RM between close pairs of radio sources provides independent estimations of the combined contribution of the intrinsic and extragalactic RM variations. This approach eliminates the Galactic contribution by assuming that two sources with small angular separation share the almost identical Galactic foreground Faraday screen. <cit.> use the NVSS RM catalogue and obtain the upper limit of the variation in RM. One of the key findings of their work is that physically related pairs, e.g., two AGN lobes of one radio galaxy, have smaller Δ RM (≈ 4.6 rad m^-2) than random associations (≈14.9 rad m^-2). This finding reinforces the explanation that the observed Δ RM originate from the extragalactic contribution. Similarly, <cit.> use the LOFAR Two-Metre Sky Survey (LoTSS) and find Δ RM ≈ 1.4 - 1.8 rad m^-2 between close radio pairs (see also ). The authors attribute the small Δ RM to the low-frequency range (144 MHz) of the LoTSS data, where sources experiencing strong Faraday rotation suffer depolarisation. They argue such depolarisation effect in fact to some degree filters out radio sources influenced by unusually strong Faraday rotation intrinsic to the sources themselves, therefore, their measurements potentially better reflect low variance components of RM such as the cosmic web (). 
In regard to the RM dispersion intrinsic to radio sources (σ_ int), there are indications that RM_ int can vary significantly source-by-source, over almost two orders of magnitude, and may systematically depend on source properties (). From the numerical simulations' perspective, there have been efforts to estimate the contribution of the IGM and large-scale cosmic filaments to the observed RM. For example, <cit.> use IllustrisTNG100 to show that the integrated Faraday rotation within large-scale magnetised feedback bubbles in the intergalactic space can be as high as a few μ G. <cit.> estimate the contribution of the extragalactic large-scale structures to the observed RM is ∼ 7-8 rad m^-2 (see also ) which is comparable to the latitude-independent component of the observed RM spread estimated by <cit.>. In their paper, the authors placed a mock observer within a Local Group-like environment which in part incorporates the contribution of the local halo environment. Yet, the primary focus of the simulations they utilize in their work is to reproduce realistic cosmic large-scale structures (the spatial resolution =195h^-1 kpc) and thus might not sufficiently reflect fluctuations in the local CGM which take place at a much smaller physical scale. In any case, it is important to note that predictions from numerical simulations, including our own, have to be interpreted with caution as the exact extent of magnetic fields in cosmic structures and the estimated Faraday rotation may depend on how numerical simulations treat magnetic field seeding and amplification (). In this section, we will demonstrate that the Galactic CGM can contribute significantly to the observed RM dispersion. We raise caution that the latitude-independent σ_ RM estimated from observations is not necessarily dominated by signals from the extragalactic environment (e.g., σ_ int and σ_ ex-gal in equation <ref>), but potentially influenced by the spread in RM caused by the Galactic CGM (σ_ MW, CGM). Although IllustrisTNG is a cosmological suite that allows the study of cosmic large-scale structures and the Faraday rotation associated with them, we postpone this to future studies. As mentioned throughout this paper, we focus instead on the CGM of a galaxy surrounding an observer at redshift 0. Accurate estimation of the contribution of large-scale structures on the observed RM requires consideration of various factors that are beyond the scope of this paper, such as (i) the evolution of magneto-ionic properties of the large-scale structure over a wide range of redshifts, (ii) the redshift distribution of polarised radio sources, and (iii) the scale factor dependency ((1+z)^-2) of the RM of polarised sources at cosmological distances (see, e.g., ). §.§ Measuring the variation in RM caused by the CGM In this paper, we calculate the spread in the all-sky RM distribution caused by the CGM (σ_ CGM) in two different ways, motivated by <cit.>. In both cases, we use the all-sky RM grid of the CGM uniformly sampled with the resolution of (Δ l, Δ b) = (0.1^∘, 0.1^∘) and Δ d=50 pc. No measurement noise (RM_ err) is added to this sample as we are interested in the RM spread purely produced by the CGM in this analysis. * The uncorrected standard deviation (hereafter, σ_ CGM, un-corr) is a standard deviation of the all-sky RM_ CGM distribution. Note that we have filtered out the galactic ISM contribution from the simulated galaxies (Section <ref>). Therefore, we do not expect any latitude-dependent component of σ_ CGM from our results. 
Also, we exclude any RM_ CGM samples from the regions where the CGM HI column density is higher than >10^19 cm^-2 in order to avoid the contribution of obvious dense structures in the CGM localized to a certain region of the sky, such as satellite galaxies. * The corrected RM standard deviation (hereafter, σ_ CGM, corr) is the value we have calculated following the correction for longitude dependencies described in Section 3 of <cit.>. We provide a brief summary of the correction method here. The motivation behind presenting the corrected σ_ CGM in this study is to ensure that we calculate the RM variance of the simulated sky as closely as possible to how it is calculated from observations. We bin the RM grids along the galactic latitude: two polar cap regions above and below b=± 78^∘ and 39 bands between -78^∘<b<78^∘ with the width of Δ b = 4^∘. Next, we further divide each cap/band along the galactic longitude and calculate the average RM within each cell. The cell size along the longitude is set to Δ l = 20^∘ for the polar caps and Δ l = 5^∘/cos(b) for the bands. The longitude-dependency of RM in each cap/band is determined using the cubic spline of the cell-averaged RM and then subtracted from the original RM grids. From this longitude-corrected all-sky RM distribution, we calculate the standard deviation of the RM for 2^∘ bins in Galactic latitude and identify the representative σ_ CGM that minimizes the χ^2 of the 90 standard deviation measurements. Figs. <ref>, <ref>, and <ref> consist of panels for each of the 56 Milky Way-like galaxies in our sample showing the all-sky RM_ CGM distribution in the Mollweide projection (left) and σ_ CGM as a function of the galactic latitude (right). As we have excluded the galactic ISM from our analysis to focus on the CGM, no obvious galactic disk is visible in the maps. The grey-shaded regions in the left panels are where the HI column density of the CGM is higher than >10^19 cm^-2, i.e., areas excluded when calculating σ_ CGM, un-corr and σ_ CGM, corr. In the right panels, there are two solid lines showing the latitude profiles of the raw σ_ CGM (grey; no high HI column density filtering and longitude correction applied) and the corrected σ_ CGM (red), respectively. Two vertical dashed lines show σ_ CGM, un-corr (blue) and σ_ CGM, corr (red). There is no overall latitude-dependency of the σ_ CGM since we have removed the galactic contribution. A lot of the excess of σ_ CGM in the raw σ_ CGM profiles are associated with the locations of the high-column density regions of the sky, therefore, applying the HI column density filter effectively removes high σ_ CGM peaks visible in the grey line, although some spikes are still present in the corrected profiles (red line). Such localized high RM variations in the corrected profiles are mitigated when taking the best-fitting value, σ_ CGM, corr, of all the latitudes (red vertical dashed line). §.§ Comparison with observations We now compare σ_ CGM of the simulated galaxies with the observed value estimated by <cit.>. The observed RM spread we are comparing our results to is the latitude-independent component of the RM variance which possibly encompasses the combined contribution of σ_ int, σ_ ex-gal, and σ_ MW, CGM terms in equation <ref>, whereas, from simulations, we are measuring the pure CGM contribution (σ_ CGM). Therefore, the measurement from the observations should be taken as an upper limit of the RM spread produced by the Milky Way CGM. 
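For reference, a rough sketch of how the corrected dispersion of Section 4.1 can be computed from a mock RM sky is given below. It uses simpler stand-ins than the paper's procedure: the longitude dependence is removed with piecewise-constant cell means rather than a cubic spline, the polar caps are treated as ordinary latitude bands, and the representative value is the median of the per-2-degree dispersions instead of a chi-square best fit.

```python
import numpy as np

def sigma_cgm_corrected(l_deg, b_deg, rm_cgm, band=4.0, dl0=5.0):
    """Longitude-corrected RM dispersion of a mock CGM sky.
    l_deg, b_deg, rm_cgm: 1-D arrays of sightline coordinates [deg] and RM [rad m^-2]."""
    rm_corr = np.array(rm_cgm, dtype=float)
    for lo in np.arange(-90.0, 90.0, band):
        in_band = np.where((b_deg >= lo) & (b_deg < lo + band))[0]
        if in_band.size == 0:
            continue
        dl = min(dl0 / max(abs(np.cos(np.radians(lo + 0.5 * band))), 1e-2), 360.0)
        cell_id = np.floor(l_deg[in_band] / dl).astype(int)
        for c in np.unique(cell_id):
            sel = in_band[cell_id == c]
            rm_corr[sel] -= rm_corr[sel].mean()   # remove the longitude-dependent component
    sigmas = [rm_corr[(b_deg >= lo) & (b_deg < lo + 2.0)].std(ddof=1)
              for lo in np.arange(-90.0, 90.0, 2.0)
              if np.count_nonzero((b_deg >= lo) & (b_deg < lo + 2.0)) > 1]
    return np.median(sigmas)
```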
Even though we have attempted to select Milky Way-like galaxies in simulations using the criteria explained in Section <ref>, σ_ CGM varies by a lot among the sample. In order to understand the galaxy-by-galaxy variation of σ_ CGM, we investigate possible scaling relations between σ_ CGM and various galaxy properties. In Fig. <ref>, we present σ_ CGM as a function of four parameters, all of which are well-constrained for both the Milky Way (red cross symbol) and the simulated galaxy sample. The filled black circles are σ_ CGM, corr and open circles are σ_ CGM, un-corr for comparison. The blue points are the measurements from TNG50-2, the lower-resolution simulation, that we will discuss in Appendix <ref> where we perform the resolution test. Most notably, there is a large galaxy-by-galaxy variation in σ_ CGM, spanning almost two orders of magnitudes. The longitude correction for σ_ CGM (filled circles compared to open circles) does reduce the σ_ CGM of individual measurements, but it does not mitigate the spread among the galaxies. Below, we discuss what we find in each panel of Fig. <ref> in more detail. * Distance to the most massive satellite (top left panel): The Milky Way is experiencing an ongoing accretion of the Magellanic System, which indeed appears to be leaving imprints in the observed RM sky, at the very least locally where the gas column density is high (; ; ; ). The distance to the Large Magellanic Clouds is ≈50 kpc (). In order to examine whether a close-by companion galaxy contributes to the spread in the all-sky RM distribution, we identify the most massive satellite galaxy within a halo and measure the distance to the galaxy. The stellar mass of the satellites varies between 10^6-10^10 M_⊙. Not surprisingly, the RM measurements along sightlines through satellite galaxies are significantly higher than other regions of the sky due to the increased gas density and magnetic field strength. However, after masking out localized high HI column density on-satellite regions, we do not find a correlation between σ_ CGM and the distance to the satellite. We further examine the distribution of galaxies whose primary satellite is a gas-rich galaxy (magenta square symbols), i.e., the total gas fraction is higher than 0.05, but there is no correlation between the satellites' gas fraction and σ_ CGM. * Specific star formation rate (top right panel): The specific star formation rate (sSFR) is defined as the total star formation rate of a galaxy divided by the stellar mass. The observed sSFR of the Milky Way is ≈ 2.7×10^-11 yr^-1 (). In simulations, we measure both the star formation rate and the stellar mass within the aperture of 30 kpc from the galactic centre. As our definition of Milky Way-like halos takes the star formation rate as one of the criteria, our sample does not span a wide range of sSFR. Even so, there is a clear positive scaling relation between σ_ CGM and sSFR within the sSFR range covered by our galaxy sample, though with a large scatter. We consider this as an indication of a link between the sSFR and magnetic properties of the CGM. This interpretation is in line with <cit.> where the authors show that a galaxy and its outflows are coupled to the magnetic field in the CGM and vice versa. * HI CGM covering fraction (bottom left panel): In order to quantify the distribution of cold clouds in the CGM using an observable parameter, we measure the sky coverage of HI circumgalactic clouds with the column density limit of >10^18 cm^-2. 
The covering fraction of the Milky Way HVCs is 20%, calculated from the HI HVC map of <cit.>. At this range of the covering fraction, simulated galaxies span a very wide range of σ_ CGM ranging almost two orders of magnitudes. The spread in σ_ CGM decreases going towards higher HI covering fractions. We find a positive correlation between σ_ CGM of simulated galaxies and the sky coverage calculated with an extremely low HI column density limit, say, >10^14 cm^-2, however, we do not present the result here as there are no observations sensitive to detect such a low column density HI CGM. * The supermassive black hole accretion rate (bottom right panel): <cit.> estimate the upper limit of the gas accretion rate of Sagittarius A*, the supermassive black hole (SMBH) of the Milky Way, is 8×10^-5 M_⊙ yr^-1. There are studies demonstrating that, at least in the IllustrisTNG suite, the SMBH feedback directly affects the gas composition and flow in the CGM of Milky Way-like galaxies (e.g., ; ). We find σ_ CGM overall scales with the SMBH accretion rate, though the scatter is large at lower accretion rates (<10^-5 M_⊙yr^-1). The IllustrisTNG suite models the blackhole feedback in two modes depending on the SMBH mass and the accretion rate (): kinetic mode and thermal mode. We show galaxies under thermal mode feedback with green-coloured square symbols but do not find any appreciable differences in trends between the two groups. We conclude that there is no single parameter that alone can explain the wide range of σ_ CGM thus the all-sky RM fluctuations arise as a result of diverse processes related to the evolution of the CGM. In all cases, the estimate from the Milky Way observations (red cross symbol) is located well within the scatter of the simulated galaxies. It is important to make it clear that we do not attempt to estimate the exact contribution of the CGM to the observed Milky Way RM variance. Instead, our result is a demonstration that in some galaxies similar to the Milky Way, Faraday rotations occurring within the CGM alone can produce RM dispersion similar to or even higher than observational estimates of all latitude-independent contributions combined. Therefore, it is possible that the RM dispersion that is intrinsic to radio sources themselves or coming from the IGM is smaller than previously considered. Ongoing investigations to independently constrain intrinsic and extragalactic RM variations using current and future polarimetric observations will help disentangle this complication. § SUMMARY In this paper, we have explored the synthetic Faraday rotation sky of Milky Way-like galaxies in IllustrisTNG50. We specifically focus on the Faraday rotation at the CGM of the galaxies and estimate its contribution to the observed RM distribution. Here, we summarize the main points discussed in this paper. First, we evaluate the detectability of individual magnetised HI clouds in the CGM by quantifying whether the RM signal produced by the clouds is statistically distinguishable from the RM measurement error. In this synthetic RM sampling experiment, the main factors we consider are the polarised source density and the RM measurement accuracy. We construct three different RM grid samplings, namely mock-NVSS, mock-POSSUM, and mock-SKA by adopting the specifications of current and forthcoming polarimetric surveys. The currently available NVSS RM catalogue provides about one RM measurement per square degree with the precision of ≈ 12.9 rad m^-2. 
We expect significant improvements in both parameters in upcoming surveys, for example, POSSUM (25 sources per square degree with ≈ 2.1 rad m^-2 precision) and SKA1-mid survey (60 sources per square degree with ≈ 1.9 rad m^-2 precision). The mock-NVSS sampling broadly reproduces the current status of the search for magnetised clouds using existing polarimetric observations, including NVSS; the detection is limited to nearby massive clouds (e.g. the Smith Cloud) and objects associated with infalling satellite galaxies (e.g. the Magellanic System). From our mock-POSSUM and mock-SKA sampling results, we predict a significant increase in the number of detections using upcoming surveys which will allow systematic studies of magnetised circumgalactic clouds. In all cases, the detection rate is particularly high for clouds that are close and massive. This trend results from both the increased angular sky coverage and the stronger magnetic fields associated with these clouds. Quantitatively, we expect about an order of magnitude increase in the detection rate of magnetised clouds with POSSUM and SKA1-mid survey compared to NVSS. Simply scaling by this factor, we expect the number of observational confirmations of magnetised circumgalactic clouds to increase from about 4 (the Smith cloud and structures associated with the Magellanic System) to almost 40 with POSSUM and 50 with the SKA1-mid survey. Not only the upcoming surveys will find many magnetised clouds, but also their significantly improved polarised source density will open new opportunities to study the magnetic field structures of the clouds in great detail. Although in this paper we primarily focus on synthesising NVSS, POSSUM, and SKA1-mid survey, our speculations about the improved power of upcoming surveys are certainly applicable to other polarimetric surveys that are already available or that will be coming very shortly. For example, The Rapid ASKAP Continuum Survey (RACS, ) has recently delivered the Spectra and Polarisation In Cutouts of Extragalactic Sources (SPICE-RACS, Thomson et al. submitted) RM catalogue that has the polarised source density of ≈4 deg^-2 and the average RM accuracy of 1.6 rad m^-2 above polarised S/N>8. Also, it is worth mentioning that both ASKAP and SKA are located in the southern hemisphere, therefore, surveys planned with telescopes in the northern hemisphere, e.g., the Karl G. Jansky Very Large Array Sky Survey (VLASS, ), will have a significant contribution in accessing the CGM towards the northern sky. We further perform an evaluation of various statistics of the RM distribution as a tracer of magnetised clouds. Traditionally, the search for magnetised clouds using RM grids has mainly focused on identifying excessive RM, which usually refers to the enhanced magnitude of RM measurements among sightlines that point towards a cloud of interest. However, we suggest that future RM surveys with high source densities will be able to discover magnetised clouds using higher-order statistics, especially the standard deviation and skewness of the RM distribution as long as correction for the Milky Way ISM foreground can be done with reasonably high accuracy. Finally, we study the degree of fluctuations in the all-sky RM distribution produced by the CGM. The observed spread in the RM distribution is an aggregation of any fluctuation introduced by Faraday rotating media between polarised sources and the observer, including the IGM, the Milky Way CGM, and the Milky Way ISM. 
The degree of importance of each individual component is difficult to estimate from the observations. By quantifying the RM variation produced solely by the CGM of the simulated Milky Way-like galaxies, we address the question of how much of the Galactic latitude-independent RM variance can be attributed to galactic/extragalactic components. The simulated galaxies, even though we try to select Milky Way-like galaxies, show a wide spread of the all-sky RM standard deviation ranging two orders of magnitudes in rad m^-2 unit. In view of the observationally demonstrated utility of Faraday rotation to measure magnetic fields in diverse extragalactic environments, we must count ourselves lucky that the Milky Way is not far more shrouded by a complicated CGM environment. We investigate the relationship between various global galaxy properties and the RM standard deviation, but we do not find a single galactic property that can explain the diversity. Instead, the RM variation in the CGM appears to be a combined result of various astrophysical processes governing the galaxy's evolution. One possible analysis we suggest for future studies is to trace how the observed RM fluctuations change over time in each galaxy and connect it to galactic processes. The observed latitude-independent RM standard deviation reported by <cit.> falls well within the scatter in the distribution of the simulated galaxies. Considering that the observed value reflects the combined contribution of the Milky Way CGM and extragalactic structures, we cannot reject the possibility of the Milky Way CGM contributing significantly to the observed RM spread. In other words, the extragalactic/intrinsic RM variation may be smaller than what has been previously thought. § DATA AVAILABILITY The IllustrisTNG simulations are publicly available at <www.tng-project.org/data>. The data directly related to this paper will be shared on reasonable request to the corresponding author. § ACKNOWLEDGEMENTS We thank the IllustrisTNG collaboration for making the data publicly available. Our analysis is performed using the Python programming language (Python Software Foundation, <https://www.python.org>). The following packages were used throughout the analysis: numpy (), SciPy (), and matplotlib (). This research also made use of a publicly available pyfof package (<https://pypi.org/project/pyfof/>). NMMcG acknowledges funding from the Australian Research Council in the form of DP190101571 and FL210100039. SLJ, NMMcG, YKM, CLVE, and CSA acknowledge the Ngunnawal and Ngambri people as the traditional owners and ongoing custodians of the land on which the Research School of Astronomy & Astrophysics is sited at Mt Stromlo. mnras § RESOLUTION TEST The IllustrisTNG50 suite consists of three simulations in identical settings and different resolutions. As for a resolution test of the results we present in this paper, we compare the fiducial simulation (TNG50-1) to the second lower-resolution simulation (TNG50-2). We explain the choice of parameters for each simulation in Table <ref> and the related text in Section <ref>. The last two columns of Table <ref> are the number of Milky Way-like galaxies and the number of circumgalactic clouds used for this study. There are clearly fewer clouds identified in TNG50-2 (in total 2052) than in TNG50-1 (in total 5218), even though there are more Milky Way-like galaxies in TNG50-2 (in total 66) than in TNG50-1 (in total 56). 
This is because lower-mass, smaller clouds are likely to suffer from the insufficient resolution of TNG50-2 (see also ). Also, substructures of one cloud complex identified as multiple individual clouds in TNG50-1 are potentially merged into one large cloud in TNG50-2. We show the HI mass distribution of clouds in the upper panel of Fig. <ref>. Indeed, in TNG50-2 (blue), there is a higher fraction of massive clouds (∼ 10^7-9 M_⊙) in comparison to TNG50-1 (black). We also find a steep decline in the number of clouds in TNG50-2 at the low mass range (∼ 10^4-5 M_⊙) which is not as severe in TNG50-1. Now, we compare a number of cloud properties directly related to the key conclusions of this paper. Earlier in Section <ref>, we demonstrate that clouds closer to an observer in general have stronger mean magnetic field strengths (see Fig. <ref>). And that, along with an increased sky coverage, is one of the factors that makes nearby clouds more detectable in RM grids at a given RM sampling specification. In the lower panel of Fig. <ref>, we again present the mean magnetic field strength of clouds versus the distance, now comparing clouds in TNG50-1 (black) and TNG50-2 (blue). Each data point corresponds to a single circumgalactic cloud identified in each simulation. The solid line is the median profile and the shaded region shows the 1σ scatter (encloses 68% of the data points). We find that the clouds identified in TNG50-1 and those identified in TNG50-2 span the same range of mean magnetic field strengths and follow the same profile of decreasing magnetic field strength with increasing distance. Earlier in Fig. <ref>, we have presented all-sky σ_ RM as a function of galaxy global properties. Focusing on comparing the results from TNG50-1 (black circles) and TNG50-2 (blue circles), we do not find a meaningful difference in the distribution of galaxies. In TNG50-2, there are a larger number of galaxies with higher σ_ RM (∼10^2 rad m^-2), but this is because of their higher SMBH accretion rate (∼ 10^-3 M_⊙ yr^-1, see bottom right panel) and is within the scaling relation also present in TNG50-1. From the resolution test we present here, we conclude that the major results of this paper are not sensitive to the resolution.
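As a closing aside, the median profile and 68 per cent band shown in the lower panel of Fig. <ref> correspond to a simple binned statistic. A minimal sketch is given below; the logarithmic binning, the minimum bin occupancy, and the variable names are assumptions made for this illustration rather than a description of the exact plotting code.

import numpy as np

def median_profile(distance, b_mean, n_bins=20):
    """Median and 16th-84th percentile band of the cloud mean |B| versus
    distance for one simulation (e.g. TNG50-1 or TNG50-2)."""
    edges = np.logspace(np.log10(distance.min()), np.log10(distance.max()), n_bins + 1)
    idx = np.digitize(distance, edges) - 1
    centres, med, lo, hi = [], [], [], []
    for i in range(n_bins):
        sel = idx == i
        if sel.sum() < 5:                      # skip sparsely populated bins
            continue
        centres.append(np.sqrt(edges[i] * edges[i + 1]))
        med.append(np.median(b_mean[sel]))
        lo.append(np.percentile(b_mean[sel], 16.0))
        hi.append(np.percentile(b_mean[sel], 84.0))
    return (np.array(centres), np.array(med), np.array(lo), np.array(hi))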
http://arxiv.org/abs/2307.04797v1
20230710180007
The Characteristic Shape of Damping Wings During Reionization
[ "Huanqing Chen" ]
astro-ph.CO
[ "astro-ph.CO", "astro-ph.GA" ]
Spectroscopic analysis of Lyα damping wings of bright sources at z>6 is a promising way to measure the reionization history of the universe. However, the theoretical interpretation of the damping wings is challenging due to the inhomogeneous nature of the reionization process and the proximity effect of bright sources. In this Letter, we analyze the damping wings arising from the neutral patches in the radiative transfer cosmological simulation suite Cosmic Reionization on Computers (CROC). We find that the damping wing profile remains a tight function of the volume-weighted neutral fraction ⟨ x_ HI⟩, especially when ⟨ x_ HI⟩>0.5, despite the patchy nature of reionization and the proximity effect. This small scatter indicates that with a well-measured damping wing profile, we could constrain the volume-weighted neutral fraction as precisely as Δ⟨ x_ HI⟩≲ 0.1 in the first half of reionization. reionization – intergalactic medium – quasars: absorption lines § INTRODUCTION The epoch of reionization (EoR) brought about a major change to the global properties of the intergalactic medium (IGM) within the first billion years of the universe. Thanks to the numerous data obtained by JWST, we are now in an excellent position to understand this frontier in astrophysics. One of the fundamental questions surrounding the EoR concerns the timing and duration of reionization, which is not yet well-constrained. Several methods have been employed to measure reionization, each with its own unique strengths and limitations. One of the earliest constraints came from cosmic microwave background experiments that utilized the Thomson scattering of free electrons. For example, <cit.> constrained the midpoint of reionization redshift to be z_ mid=7.82 ± 0.71 <cit.>. However, accurately measuring the entire history of reionization from start to end using the Thomson scattering effect on the CMB alone is challenging due to its integrated nature. Another powerful method to characterize the full cosmic dawn and reionization history is through the 21cm line emission from neutral hydrogen. However, at such long wavelengths, the foreground is orders of magnitude brighter than the signal, making data reduction notoriously difficult <cit.>. A further alternative for measuring the entire reionization history is to use the Lyα absorption in front of bright sources at different redshifts during the EoR <cit.>. As a strong resonant line, Lyα is sensitive to any trace of neutral hydrogen, allowing us to detect the very end of reionization (neutral fraction ≲ 10^-4) <cit.>.
Moreover, when there are neutral patches left in the IGM, the Lyα absorption line displays a large damping wing, reaching thousands of km/s in the spectrum where the flux is suppressed. <cit.> shows that, assuming a uniform reionization model, the damping wings have a characteristic shape and can be used to constrain the neutral fraction for bright background sources like gamma-ray bursts (GRBs). However, reionization is a patchy process. The neutral fraction does not drop uniformly everywhere. Rather, some regions become highly ionized first, while other regions, shielded from ionizing sources, remain neutral until much later. In fact, many semi-numerical codes based on the excursion-set formalism <cit.> treat every point in the universe as either neutral or ionized, therefore the term neutral fraction is meaningful only when averaged over a certain volume or mass. In the literature of reionization, the term neutral fraction most commonly refers to the volume-weighted neutral fraction ⟨ x_ HI⟩ over the entire universe <cit.>. Given the patchy nature of reionization, one natural question is whether the variance of the damping wing profile is too large to differentiate universes with different ⟨ x_ HI⟩, or if it is small enough that the characteristic shape still holds. Another complication arises from the fact that bright sources, such as quasars, which provide high-resolution spectra for analysis, emit a large amount of ionizing radiation themselves. This radiation can alter the local morphology of reionization. Many semi-numerical methods can create a map of the ionized bubbles produced by typical galaxies <cit.>, but unusually bright sources like quasars are not modeled. Does the removal of neutral patches close to bright sources like quasars significantly change the shape of the damping wing? In this Letter, we use the radiative transfer cosmological simulation suite Cosmic Reionization on Computers (CROC) to address the above questions. Such a study is timely as more and more bright sources at z>6 are spectroscopically followed up and available to be used in constraining the reionization history. This Letter does not intend to describe the full process of extracting the neutral fraction from data, but serves to estimate the optimal precision of neutral fraction measurements achievable using the damping wing. § SIMULATION We use the CROC simulations[The cosmological parameters used in CROC are: Ω_b=0.0479, Ω_M=0.3036, Ω_Λ=0.6964, h=0.6814, n_s=0.9675, σ_8=0.8285, k_ pivot=0.029.] <cit.> to study the damping wings arising from the patchily ionized IGM. The CROC project uses the Adaptive Refinement Tree (ART) code <cit.> to reach high spatial resolution (base grid length of 39 h^-1  ckpc, peak resolution ∼ 100 pc in physical units). CROC simulations include relevant physics such as gas cooling, heating, star formation, stellar feedback and on-the-fly radiative transfer <cit.>. The main ionization sources in the simulations are star particles which are formed in dense gas in galaxies. In this project, we primarily use the uniform-grid data in one of the 40 cMpc/h runs (CROC B40F) alongside Rockstar <cit.> halo catalogs to locate dark matter halos. The uniform-grid data contain the gas neutral fraction, density, and temperature in each base grid cell. They are saved frequently (with increments in expansion factor Δ a=0.001) so that we can sample a large range of ⟨ x_ HI⟩ and study the entire reionization process. In Figure <ref>, we show the neutral fraction map at three different redshifts overlaid with halos of different masses.
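Before turning to the results, it is useful to spell out the elementary operation used throughout the next section: converting a skewer of neutral cells into a damping-wing transmission profile by summing the Lyα opacity of every neutral cell. The following Python sketch is only an illustration of that step (it adopts standard Lyα atomic constants, a Lorentzian-wing approximation to the cross-section, and our own variable names, and it ignores peculiar velocities and the thermal line core); it is not the CROC analysis pipeline itself.

import numpy as np

# Lyman-alpha atomic data (cgs): line frequency, damping rate, and the
# frequency-integrated cross-section pi e^2 / (m_e c) times the oscillator strength.
NU_LYA  = 2.466e15            # Hz
GAMMA   = 6.265e8             # s^-1
SIGMA_0 = 0.02654 * 0.4164    # cm^2 Hz

def wing_cross_section(delta_nu):
    """Lorentzian (damping-wing) approximation to the Lyα cross-section,
    valid far from line centre where the thermal core is negligible."""
    return SIGMA_0 * (GAMMA / (4.0 * np.pi**2)) / (delta_nu**2 + (GAMMA / (4.0 * np.pi))**2)

def damping_wing(v_obs, n_hi, dl, v_cell):
    """Transmission exp(-tau) at velocity offsets v_obs [km/s] from the source,
    produced by the neutral cells of one skewer.

    n_hi   : proper HI number density of each cell [cm^-3]
    dl     : proper path length of each cell [cm]
    v_cell : velocity offset of each cell from the source [km/s]
    """
    c_kms = 2.998e5
    v_obs = np.asarray(v_obs, dtype=float)
    tau = np.zeros_like(v_obs)
    for n, length, v in zip(n_hi, dl, v_cell):
        if n <= 0.0:
            continue
        delta_nu = NU_LYA * (v - v_obs) / c_kms      # Hz; only |delta_nu| matters in the wing
        tau += n * length * wing_cross_section(delta_nu)
    return np.exp(-tau)

# In the procedure of the next section, only cells with x_HI > 0.5 lying beyond a
# randomly drawn excision radius would be passed to damping_wing().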
§ RESULTS To simulate the damping wing profiles, we first draw skewers (sightlines) starting from massive halos. We locate halos from the Rockstar halo catalogues in the uniform-grid box. At each redshift, we select the 100 most massive halos and draw 10 skewers of length 200   cMpc/h uniformly distributed in a 3D sphere. In the left panel of Figure <ref>, the brown line shows the neutral fraction along one example skewer drawn from a snapshot where the volume-weighted neutral fraction is ⟨ x_ HI⟩=0.5. To study the universe with different neutral fractions ⟨ x_ HI⟩, we use skewers drawn at different redshifts of the same simulation run. When calculating the Lyα absorption, we keep the neutral fraction and temperature of each cell unchanged while scaling the physical length and density to a certain redshift z_t by a and a^-3, respectively, where a is the expansion factor. The results shown in this paper are calculated for z_t=6.54. An unusually bright source like a quasar could push the I-front farther away. To mimic such an extra ionizing effect, when calculating the damping wing, we first draw a random number from a uniform distribution between [0, 40] cMpc/h and remove all neutral gas within this distance. This procedure aims to examine the maximum variance of the damping wing shape. Then we convolve the Voigt profiles from the rest of the neutral cells (x_ HI>0.5) along the skewer. In the left panel of Figure <ref>, we show this procedure for a skewer drawn from the box with ⟨ x_ HI⟩=0.5: the faint blue, orange and green vertical lines show three random positions within which we remove all neutral gas, and the solid profiles are the damping wings arising from the remaining neutral gas, integrated out to 200   cMpc/h. We find that although the lengths of the first neutral patch are different, after convolution with all neutral patches behind it, the shapes of the profiles are very similar. This is more evident in the right panel, where we compare these profiles after aligning them at the starting position (the first point where the transmission drops to zero). In Figure <ref>, we plot the median of the aligned damping wing profiles in snapshots of different ⟨ x_ HI⟩ using solid lines, with each colored band showing the 68% scatter. For ⟨ x_ HI⟩≥ 0.5, the scatter of the wing profile is very small despite the patchy nature of reionization, and profiles with Δ⟨ x_ HI⟩=0.25 are clearly separated. We also compare the damping wing profiles with the ones created without randomly cutting the inner region (dash-dotted lines). If the inner neutral regions are not excised, the median damping wing is slightly stronger, but well within the scatter. The scatter of the no-cut case is almost identical to the previous case and thus not shown. We also calculate the damping wings assuming a uniform density and uniform reionization scenario (every cell has the same neutral fraction x_ HI=⟨ x_ HI⟩ and every cell contributes to the damping wing), and the results are shown as dotted lines. Compared with the patchy ionization scenario with the inner region excised, the damping wing is in general stronger, especially for 0.5≲⟨ x_ HI⟩≲ 0.75, but the differences are still small compared to the scatter. § DISCUSSION §.§ Cosmic variance Due to the small size (40 cMpc/h) of the simulation box, one might question whether the small scatter shown in the last section still holds when considering cosmic variance. To investigate this, we repeat the procedure in another box (CROC B40C). Both simulations have the same physics but different initial conditions (“DC modes”).
As a result, B40F reionized the latest (reionization midpoint z_ mid=7.4) while B40C reionized the earliest (z_ mid=8.2) among all six 40 cMpc/h CROC realizations. Therefore, the density environments and halo distributions in these two boxes should differ maximally among all six realizations, and comparing damping wings in these two boxes helps us understand the stochasticity due to cosmic variance. In Figure <ref>, we compare the damping wings in box B40C with B40F of the previous section. We find that the mean and the scatter are almost identical, suggesting that the damping wings indeed have a characteristic shape as a function of ⟨ x_ HI⟩. §.§ Practical use Our simulations show that for a mostly neutral universe (⟨ x_ HI⟩ > 0.5), the scatter in damping wing profiles is small enough to distinguish between models separated by Δ⟨ x_ HI⟩≈ 0.1. However, measuring the entire damping wing profile is complicated in practice. In this subsection we briefly discuss the prospects of using the damping wing to constrain ⟨ x_ HI⟩. <cit.> originally proposed GRB afterglows as the best candidates for measuring ⟨ x_ HI⟩ with damping wings. Compared with galaxies or quasars, GRBs have many advantages. They are thought to be produced in normal galaxies and thus live in less biased environments <cit.>. The number of integrated ionizing photons they contribute is also very small and unlikely to enlarge the local ionized bubble. In addition, they are intrinsically bright enough to be spectroscopically followed up. One challenge of using GRB afterglows is how to model the damped Lyα absorbers (DLAs) in the host galaxies. <cit.> shows that using the empirical distribution from current GRB afterglow spectra, one could model the local DLA distribution and marginalize over this nuisance parameter. Although <cit.> does not consider the scatter of damping wing profiles, the small scatter we find in the CROC simulations supports their forecast that with ≳ 20 GRB afterglows with spectral resolution R≳ 3000 and signal-to-noise ratio (SNR) ≳ 20, one could reach a precision close to 15% in the first half of reionization. Quasars are the sources for which we can obtain the highest-resolution spectra at z>6. The current highest-quality sample of z>6 quasar spectra has SNR≳ 50 and R≳ 10000 <cit.>. Thanks to their high luminosity, the residual neutral fraction in their proximity zone is small enough to allow significant flux on the blue side of the Lyα line. Such flux offers extra information about the shape of the damping wing. The challenge of using quasars is that by z≈ 7, a quasar may have enlarged the local bubble significantly. Due to the decrease in quasar radiation with distance, the transmitted flux also decreases. This reduction in flux compromises the constraining power on the starting point of the damping wing, which is crucial for anchoring the shape of the damping wing. In the ideal case, we may catch a quasar in its bright phase, where the integrated number of ionizing photons emitted by the quasar is still small while the instantaneous luminosity is high enough to create a highly transparent proximity zone. This would allow us to observe details of the Lyα forest and measure flux close to the starting point of the damping wing, providing greater constraining power for the shape of the entire damping wing. In addition, similar to the GRB afterglow case, we need to develop a better understanding of how to model the intrinsic quasar continua.
Since the scatter in damping wing profile is < 10% at wavelength <-1000 km/s from the starting point of the damping wing, it is ideal to have an accuracy in continuum recovery better than 10 % across the quasar emission line (from ≲ -4000 km/s to where no transmitted flux presents). With the successful operation of JWST, we now have the capability to measure spectra from Lyman Break Galaxies (LBGs) or Lyman-alpha Emitters (LAEs) <cit.>. While these sources are more numerous than quasars, their low luminosity limits the achievable spectral resolution. As a result, information about the damping wing is mainly contained in the equivalent width (EW) measurements. However, if we can combine the information of both the LBG/LAE positions and their EWs, it would be promising to constrain the neutral fraction by considering both the damping wing strength and the size of ionized bubbles <cit.>. This avenue will be explored in future work. § CONCLUSIONS In this paper, we analyze the damping wings arisen from the partially ionized IGM in a self-consistent radiative transfer cosmological simulation suite CROC. We find that when the volume-weighted neutral fraction < x_ HI> > 0.5, the shape of the damping wing has a characteristic shape with small scatter (≲ 10%). This scatter remains small even after an unusually bright source (such as a quasar) erodes a significant amount of neutral gas around it. This is because the damping wing arises from the collective, convoluted Voigt profiles along a large distance (hundreds of comoving Mpc). We also calculate the damping wing profiles in a uniform reionization case, and we find that it lies within the 68% scatter. The small scatter in the damping wing profiles indicates that we can expect an accuracy of Δ≈ 0.1 if we could measure the damping wing profile precisely. In reality, there are several complications, notably how to model the intrinsic source spectra and the absorption within the ionized bubble. The profiles we find suggest that in order to achieve the best constraints in neutral fraction, we should aim for an accuracy of continuum fitting better than 10% across the emission line of the source (from ≲ -4000 km/s to where no transmitted flux presents). For a very bright source such as a quasar, the complication of absorption inside the ionized bubble could potentially be mitigated by properly modeling the large-scale structure, which we plan to explore in the future. § ACKNOWLEDGEMENTS HC thanks the support by the Natural Sciences and Engineering Research Council of Canada (NSERC), funding reference #DIS-2022-568580. § DATA AVAILABILITY The data underlying this article will be shared on reasonable request to the author. mnras
http://arxiv.org/abs/2307.04193v1
20230709145446
Some new constructions of optimal linear codes and alphabet-optimal $(r,δ)$-locally repairable codes
[ "Jing Qiu", "Fang-Wei Fu" ]
cs.IT
[ "cs.IT", "math.IT" ]
Jing QiuChern Institute of Mathematics and LPMC, Nankai University Tianjin, 300071, P. R. China [email protected] Fang-Wei FuChern Institute of Mathematics and LPMC, Nankai University Tianjin, 300071, P. R. China [email protected] Some new constructions of optimal linear codes and alphabet-optimal (r,δ)-locally repairable codes Jing Qiu Fang-Wei Fu Received: date / Accepted: date ====================================================================================================== In distributed storage systems, locally repairable codes (LRCs) are designed to reduce disk I/O and repair costs by enabling recovery of each code symbol from a small number of other symbols. To handle multiple node failures, (r,δ)-LRCs are introduced to enable local recovery in the event of up to δ-1 failed nodes. Constructing optimal (r,δ)-LRCs has been a significant research topic over the past decade. In <cit.>, Luo et al. proposed a construction of linear codes by using unions of some projective subspaces within a projective space. Several new classes of Griesmer codes and distance-optimal codes were constructed, and some of them were proved to be alphabet-optimal 2-LRCs. In this paper, we first modify the method of constructing linear codes in <cit.> by considering a more general situation of intersecting projective subspaces. This modification enables us to construct good codes with more flexible parameters. Additionally, we present the conditions for the constructed linear codes to qualify as Griesmer codes or achieve distance optimality. Next, we explore the locality of linear codes constructed by eliminating elements from a complete projective space. The novelty of our work lies in establishing the locality as (2,p-2), (2,p-1), or (2,p)-locality, in contrast to the previous literature that only considered 2-locality. Moreover, by combining analysis of code parameters and the C-M like bound for (r,δ)-LRCs, we construct some alphabet-optimal (2,δ)-LRCs which may be either Griesmer codes or not Griesmer codes. Finally, we investigate the availability and alphabet-optimality of (r,δ)-LRCs constructed from our modified framework. § INTRODUCTION Let 𝔽_q be the finite field with q elements and _q^*=_q∖{0}, where q is any prime power. In this paper, we assume that p is an odd prime, m is a positive integer, and define [m]≜{1,2,…,m}. Consider 𝔽_q^m as an m-dimensional vector space over 𝔽_q, and let _q^m*=_q^m∖{0}, where 0 denotes the zero vector. §.§ Griesmer codes A q-ary [n, k, d] linear code 𝒞 is a k-dimensional subspace of 𝔽_q^n with minimum distance d. For a q-ary [n,k,d] linear code, the Griesmer bound is given by<cit.> n ≥∑_i=0^k-1⌈d/q^i⌉, where ⌈·⌉ is the ceiling function. A linear code achieving the Griesmer bound is called a Griesmer code. If there is no linear code with parameters [n, k, d^' > d] for an [n, k, d] linear code 𝒞, we classify 𝒞 as distance-optimal. Solomon and Stiffler<cit.> utilized unions of mutually disjoint projective spaces to propose an infinite family of binary Griesmer codes. More recently, Hyun et al. <cit.> constructed infinite families of binary Griesmer codes by utilizing unions of projective subspaces. This construction was later generalized to the p-ary case by Luo et al. <cit.>. §.§ Locally repairable codes To reduce the repair bandwidth in massive reliable scale distributed storage system, the concept of locally repairable codes (LRCs) <cit.> emerged. 
The i-th coordinate of an [n, k] linear code 𝒞 is said to have r-locality if the value at this coordinate can be recovered by accessing at most r other coordinates. If all the coordinates have r-locality, we call 𝒞 an r-LRC. However, when multiple node failures occur, the original concept of locality may not work. Prakash et al. <cit.> introduced the concept of (r, δ)-locality of linear codes, where δ≥ 2, which generalized the notion of r-locality. The i-th coordinate of 𝒞 is said to have (r, δ)-locality (δ≥ 2) if there exists a subset S_i ⊂{1, 2, …, n} such that i∈ S_i, |S_i|≤ r+δ-1 and the punctured code 𝒞|_S_i has minimum distance d(𝒞|_S_i) ≥δ; the set S_i∖{i} is termed the repair set of the i-th coordinate. A code 𝒞 is said to have (r, δ)-locality or to be an (r,δ)-LRC if all the coordinates of 𝒞 have (r, δ)-locality. Note that (r,δ)-locality reduces to r-locality when δ=2, and we call a code an LRC if it has r-locality or (r,δ)-locality. A Singleton-like bound for the minimum distance of an (r, δ)-LRC is given as follows <cit.>: d(𝒞) ≤ n-k- (⌈k/r⌉ -1 )(δ-1) +1. An (r,δ)-LRC achieving the bound (<ref>) is said to be Singleton-optimal. In the last decade, many constructions of Singleton-optimal LRCs have been proposed, for example see <cit.>. To consider the alphabet size and address practical application needs, Cadambe and Mazumdar <cit.> introduced a new bound called the C-M bound for an [n, k, d] LRC over 𝔽_q with locality r. The C-M bound is given as follows: k≤min_1≤ t ≤⌈k/r⌉-1{tr+k_ opt^(q)(n-t(r+1), d)}, where k_ opt^(q)(n, d) denotes the maximum dimension of a linear code over 𝔽_q of length n and minimum distance d. An r-LRC achieving the bound (<ref>) is said to be alphabet-optimal. The C-M like bound for (r,δ)-LRCs was obtained in <cit.> as follows: k≤min_1≤ t ≤⌈k/r⌉-1{tr+k_ opt^(q)(n-t(r+δ-1), d)}. An (r,δ)-LRC achieving the bound (<ref>) is also said to be alphabet-optimal in the absence of ambiguity. It was demonstrated in <cit.> that binary Simplex codes are alphabet-optimal with locality 2. Several infinite families of alphabet-optimal binary LRCs were proposed in <cit.> by considering punctured Simplex codes. Some alphabet-optimal binary LRCs constructed from partial spreads were presented in <cit.>. Luo and Cao <cit.> constructed seven infinite families of alphabet-optimal binary LRCs by using a general framework for binary linear codes. For the nonbinary case, Silberstein and Zeh <cit.> proposed several infinite families of alphabet-optimal p-ary LRCs with locality 2 or 3 by puncturing Simplex codes. Tan et al. <cit.> presented some infinite families of q-ary LRCs achieving the bound (<ref>) by determining the localities of some known linear codes. Very recently, Luo and Ling <cit.> proposed more infinite families of alphabet-optimal LRCs with locality 2 by employing the general framework of constructing p-ary linear codes. Note that the method in <cit.> can also be regarded as puncturing the Simplex codes. In fact, almost all the constructed alphabet-optimal LRCs can be regarded as punctured codes of the Simplex code, and most related papers focus on r-locality. In <cit.>, Fu et al. provided some Singleton-optimal (r,δ)-LRCs from the Simplex code and Cap code, but the dimensions of these codes are limited to {3,4}. In distributed storage systems, to permit access to a coordinate in multiple ways in parallel, LRCs were generalized to LRCs with availability in <cit.> and <cit.>, in which case a coordinate has more than one repair set.
For this topic, the readers may refer to <cit.>,<cit.>, <cit.>, <cit.>, <cit.>,<cit.>,<cit.>. §.§ Our contributions and techniques Our contributions can be summarized as follows: (i) We modify the method of constructing linear codes proposed in <cit.> by relaxing the restrictions on projective subspaces. This allows us to obtain some optimal codes with more flexible parameters. (ii) We provide criteria for determining the (2, p-2), (2, p-1), and (2, p)-localities of q-ary linear codes constructed by eliminating elements from a complete projective space. We also propose constructions for p-ary alphabet-optimal (2, p-1), and (2, p)-LRCs. Notably, we prove that p-ary alphabet-optimal 2-LRCs constructed in <cit.> are also alphabet-optimal (2,p-1)-LRCs. Moreover, we point out that the criteria for determining the (r,δ)-localities of p-ary codes can be generalized to determining the (r,δ)-localities of q-ary codes, where q is a prime power. From which, we prove that q-ary Simplex codes are alphabet-optimal (2, q)-LRCs with respect to the bound (<ref>). (iii) We demonstrate that the new linear codes constructed from the modified framework are (r, δ)-LRCs with availability. Although we do not have the exact expression of the alphabet size related bound for (r, δ)-LRCs with availability, we can confirm that some of new constructed codes are alphabet-optimal. Specifically, we propose a sufficient condition for these codes to be alphabet-optimal. From which, infinite families of alphabet-optimal (r,δ)-LRCs with availability can be obtained. To the best of our knowledge, there has been no general construction of alphabet-optimal (r,δ)-LRCs with availability. This paper is organized as follows. Section 2 introduces some basic results that are needed for our discussion. Section 3 is devoted to generalize the framework and constructions of optimal linear codes over 𝔽_p in <cit.>. In Section 4, we present criteria for determining (2,p-2), (2,p-1), and (2,p)-locality of p-ary linear codes constructed by eliminating elements from a complete projective space. From which we can get some alphabet-optimal LRCs over 𝔽_p. Note that in term of locality, the results can be generalized to q-ary codes without any difficulties. In Section 5, we discuss the (r,δ)-locality with availability of the linear codes constructed in Section 3, a sufficient condition for these linear codes to be alphabet-optimal is provided. Finally, Section 6 concludes this paper. § PRELIMINARIES §.§ A general framework of constructing linear codes In this subsection, we introduce a general construction of linear codes and some basic results about additive characters and projective spaces over finite fields. For any vector x = (x_1,…, x_n) ∈_p^n, define the Hamming weight of x as (x) = |{i ∈ [n] : x_i ≠ 0}|. For a linear code 𝒞, let A_i denote the number of codewords in 𝒞 with weight i. The sequence (A_0, A_1, …, A_n) is called the weight distribution of 𝒞. The weight enumerator of 𝒞 is defined as 1 + A_1z + A_2z^2 + … + A_nz^n. In <cit.>, Ding et al. proposed a universal framework of constructing linear codes based on trace function and a nonempty subset D = {d_1,…, d_n}⊂_p^m. By employing this framework, a p-ary linear code of length n can be formed as follows: 𝒞_D={c_x=((xd_1),…,(xd_n)): x∈_p^m}, where (·) is the trace function from 𝔽_p^m to 𝔽_p given by (y)=y+y^p+…+y^p^m-1. Here, c_x represents the codeword corresponding to the element x in the finite field _p^m. The subset D is referred to as the defining set of the linear code 𝒞_D. 
Assume that ξ_p is a primitive p-th root of complex unity. For any a ∈𝔽_p^m, an additive character of 𝔽_p^m is defined as the function χ_a(x) = ξ_p^ tr(ax), where x ∈𝔽_p^m. All additive characters of 𝔽_p^m form a group of order p^m with operation χ_a+b(x)=χ_a(x)χ_b(x). The famous orthogonal relation of additive characters is given as follows: ∑_x∈_p^mχ_a(x)={[ 0, ,; p^m, . ]. Suppose that {α_1, …, α_m} is a basis of _p^m over _p, then there exists a unique basis {β_1, …, β_m} of _p^m over _p satisfying tr(α_iβ_j)={[ 1, ,; 0, , ]. for any 1 ≤ i , j ≤ m. We call {β_1, …, β_m} the dual basis of {α_1, …, α_m}. For any x, y ∈_p^m, we can represent them by x = ∑^m_i=1 x_iα_i and y =∑^m_i=1 y_iβ_i, where x_i , y_i ∈_p for any i ∈ [m]. Then, x and y can be expressed as vectors 𝐱 = (x_1, x_2, … , x_m) and 𝐲 = (y_1, y_2, … , y_m) in ^m_p, respectively. The Euclidean inner product of 𝐱 and 𝐲 is defined as 𝐱·𝐲 = ∑^m_i=1x_i y_i. It can be easily verified that (xy) = 𝐱·𝐲. Consequently, we can express the additive character χ_y(x) of _p^m as χ_y(x) = ξ_p^𝐱·𝐲. Based on the above discussion, the previously defined linear code 𝒞_D is equivalent to 𝒞_𝒟={c_𝐱=(𝐱·𝐝_1,…,𝐱·𝐝_n): 𝐱∈_p^m}, where 𝒟 = {𝐝_1, … , 𝐝_n}⊂^m_p is referred to as the defining set of 𝒞_𝒟. The matrix G=[𝐝_1^T,𝐝_2^T,…,𝐝_n^T] can be regarded as a generator matrix of 𝒞_𝒟, and the rank of G is equal to the dimension of 𝒞_𝒟. In <cit.>, Luo et al. introduced new constructions of Griesmer codes and distance-optimal linear codes by considering the defining set as the complement of the union of certain projective subspaces within a projective space. We will follow the notations in <cit.>. Let V be the m-dimensional vector space 𝔽_p^m. Two nonzero vectors 𝐱=(x_1,x_2,…, x_m) and 𝐲=(y_1 , y_2 , …, y_m) in V are said to be equivalent, denoted by x∼y, if 𝐲 = λ𝐱 for some λ in 𝔽_p^*. The relation ∼ is indeed an equivalence relation. Denote (x_1 : x_2 : …: x_m) the equivalent class consists of all nonzero scalar multiples of (x_1 , x_2 , …, x_m). The set of all equivalent classes in V is a projective space over 𝔽_p with dimension m-1, termed the projective space of V. The elements of a projective space are called points. For every point (x_1 : x_2 : …: x_m) in the projective space of V, we can use arbitrary nonzero scalar multiple of (x_1 , x_2 , …, x_m) to express the point. Let 𝒜 be a nonempty subset of [m]. Define an |𝒜|-dimensional vector space over _p by L_𝒜={(a_1, … , a_n):a_i∈_p  if  i∈𝒜 and  a_i=0  if i∉𝒜}. Assume that P_𝒜 is the projective space of L_𝒜. For convenience, we assign the expression of every point in P_𝒜 as the vector of _p^m in the corresponding equivalent class whose first nonzero coordinate is 1. In this way, P_𝒜 can be regarded as a subset of L_𝒜. It is easy to check that |P_𝒜| = p^|𝒜|-1/p-1 and L_𝒜∖{0} = ⋃_a∈_p^*aP_𝒜. Obviously, L_𝒜∖{0} = P_𝒜 if p = 2. For any two subsets 𝒜_1,𝒜_2 of [m], the intersection of P_𝒜_1 and P_𝒜_2 is equal to P_𝒜_1∩𝒜_2, where P_∅ = ∅. §.§ Modifying the framework As we can see from (<ref>), the defining set in the original framework is required to be a subset of ^m_p. In this subsection, we modify the framework by allowing defining set to be a multi-set consisting of vectors from ^m_p. (Modified framework) Suppose that s is a positive integer. Let 𝒟_1, 𝒟_2,…, 𝒟_s be subsets of ^m_p. We can define a p-ary linear code by 𝒞_(𝒟_1,𝒟_2,…,𝒟_s)={c_𝐱=(𝐱 G_1,…,𝐱 G_s): x∈_p^m}, where G_i denotes the matrix whose columns are transpose of vectors of 𝒟_i for all i∈ [s]. 
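For small p and m, the construction (<ref>) can be explored exhaustively, which is convenient for checking the parameters and weight distributions of the codes constructed later in this paper. The following Python sketch is our own illustration (it uses 0-based coordinate indices and plain modular arithmetic rather than a dedicated coding-theory package): it enumerates P_𝒜, forms defining sets of the type used below, and computes the parameters of 𝒞_(𝒟_1,…,𝒟_s) by listing all codewords.

import itertools
import numpy as np

def proj_points(p, m, A):
    """P_A: canonical representatives (first nonzero coordinate equal to 1) of the
    projective space of L_A, where A is a set of 0-based coordinate indices."""
    A = sorted(A)
    pts = set()
    for v in itertools.product(range(p), repeat=len(A)):
        nz = [i for i, vi in enumerate(v) if vi]
        if nz and v[nz[0]] == 1:
            full = [0] * m
            for idx, vi in zip(A, v):
                full[idx] = vi
            pts.add(tuple(full))
    return pts

def defining_set(p, m, B_list):
    """D = P_[m] minus the union of the projective subspaces P_B, B in B_list."""
    D = proj_points(p, m, range(m))
    for B in B_list:
        D -= proj_points(p, m, B)
    return D

def code_parameters(p, D_list):
    """Length, dimension, minimum distance and weight distribution of
    C_(D_1,...,D_s), whose generator matrix stacks the D_i as columns."""
    cols = [d for D in D_list for d in sorted(D)]
    m = len(cols[0])
    G = np.array(cols, dtype=int).T % p                      # m x n matrix over F_p
    codewords = {tuple(np.dot(x, G) % p)
                 for x in itertools.product(range(p), repeat=m)}
    weights = {}
    for c in codewords:
        w = sum(1 for ci in c if ci)
        weights[w] = weights.get(w, 0) + 1
    k = round(np.log(len(codewords)) / np.log(p))
    d = min(w for w in weights if w > 0)
    return G.shape[1], k, d, dict(sorted(weights.items()))

def griesmer_length(k, d, p):
    """Right-hand side of the Griesmer bound: sum of ceil(d / p^i), i = 0..k-1."""
    return sum(-(-d // p**i) for i in range(k))

For instance, defining_set(3, 3, [{0, 1}, {2}]) and defining_set(3, 3, [{0}]) give the two defining sets used in the first worked example later in the paper, and code_parameters recovers its [20, 3, 13] parameters.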
Following the approach outlined in <cit.>, we can consider each 𝒟_i(1≤ i ≤ s) as the complement of unions of specific projective subspaces within a projective space. Consequently, the construction of good linear codes can be simplified to design suitable subsets of [m]. Suppose that t > 1 is an integer. Let E={ℰ_1, ℰ_2,…, ℰ_t} be a multi-set with elements being nonempty subsets of [m], we call set ⋃_1≤ i<j≤ t(ℰ_i ∩ℰ_j) the center of E, denoted by Center(E). (Property I_s) Suppose that ℓ > 1 is a positive integer, 𝒜_1, 𝒜_2,…, 𝒜_ℓ are nonempty subsets of [m]. If we can partition the multi-set A={𝒜_i}_i=1^ℓ into the form as A=B_1∪ B_2∪⋯∪ B_s, where B_j={ℬ_1^(j), ℬ_2^(j),…, ℬ_ℓ_j^(j)} for any 1≤ j≤ s, s, ℓ_1,ℓ_2,…,ℓ_s are positive integers satisfying ℓ=∑_i=1^sℓ_i, such that 𝒜_i∖⋃_j=1^s Center(B_j)≠∅ for any 1≤ i ≤ℓ, then 𝒜_1, 𝒜_2,…, 𝒜_ℓ are said to satisfy Property I_s. In <cit.>, the authors initially required that 𝒜_1, 𝒜_2,…, 𝒜_ℓ are nonempty subsets of [m] satisfying 𝒜_i ∖⋃_j∈ [ℓ]∖{i}𝒜_j≠∅ for every i ∈ [ℓ]. These requirements are in fact equivalent to Property I_s with s=1 by the following lemma. Suppose that ℓ > 1 is a positive integer. Let 𝒜_1, 𝒜_2,…, 𝒜_ℓ be nonempty subsets of [m], and let A be the multi-set {𝒜_i}_i=1^ℓ. For any i∈ [ℓ], 𝒜_i ∖⋃_j∈ [ℓ]∖{i}𝒜_j≠∅ if and only if 𝒜_i∖ Center(A)≠∅. For any i∈ [ℓ], we have 𝒜_i∖ Center(A) = 𝒜_i∖⋃_1≤ j<k≤ℓ(𝒜_j ∩𝒜_k) = 𝒜_i∖((𝒜_i∩⋃_j∈ [ℓ]∖{i}𝒜_j)∪ Center(A∖{𝒜_i}) ) = 𝒜_i∖(⋃_j∈ [ℓ]∖{i}𝒜_j∪ Center(A∖{𝒜_i}) ) =𝒜_i∖⋃_j∈ [ℓ]∖{i}𝒜_j, where the last equation comes from Center(A∖{𝒜_i}) ⊂⋃_j∈ [ℓ]∖{i}𝒜_j. The proof is completed. §.§ Some auxiliary lemmas For 𝒟⊂^m_p, 𝒜⊂ [m] and 𝐱∈^m_p, let χ_𝐱(𝒟) = ∑_𝐲∈𝒟ξ_p^𝐱·𝐲 and let 𝐱_𝒜 be a vector obtained from 𝐱 by removing the coordinates in [m] ∖𝒜. The following two lemmas play a fundamental role in <cit.> and are also relevant to our proofs. <cit.> Assume that 𝒜_1 and 𝒜_2 are subsets of [m] such that they do not contain each other. Let P_𝒜_i be the projective space of L_𝒜_i defined as in (<ref>), i = 1, 2. Then, for any 𝐱∈𝔽_p^m*, we have wt(c_𝐱) ={[ p^|𝒜_1|+P^|𝒜_2|-P^|𝒜_1∩𝒜_2|-1, if 𝐱_𝒜_1=0,𝐱_𝒜_2=0,; p^|𝒜_1|-P^|𝒜_1∩𝒜_2|-1, if 𝐱_𝒜_1=0,𝐱_𝒜_2≠0,; p^|𝒜_2|-P^|𝒜_1∩𝒜_2|-1, if 𝐱_𝒜_1≠0,𝐱_𝒜_2=0,; -p^|𝒜_1∩𝒜_2|-1, if 𝐱_𝒜_1≠0,𝐱_𝒜_2≠0, 𝐱_𝒜_1∩𝒜_2=0,; -1, if 𝐱_𝒜_1≠0,𝐱_𝒜_2≠0, 𝐱_𝒜_1∩𝒜_2≠0. ]. <cit.> Suppose that ℓ > 1 is a positive integer. Let 𝒜_1,𝒜_2, … ,𝒜_ℓ be nonempty subsets of [m] satisfying 𝒜_i ∖⋃_j∈ [ℓ]∖{i}𝒜_j≠∅ for any i ∈ [ℓ]. Then min{∑_y∈_p^*χ_𝐱(y(⋃_i=1^ℓP_𝒜_i)):𝐱∈_p^m*}=-1+∑_k=2^ℓ(-1)^k-1∑_1≤ i_1<…<i_k≤ℓp^|⋂_j=1^k𝒜_i_j| . In <cit.>, the authors also pointed out that ∑_y∈_p^*χ_𝐱(y(⋃_i=1^ℓP_𝒜_i)) =-1+∑_k=2^ℓ(-1)^k-1∑_1≤ i_1<…<i_k≤ℓp^|⋂_j=1^k𝒜_i_j| if and only if 𝐱_𝒜_i≠0 for all i ∈ [ℓ] and 𝐱_𝒜_i_1∩𝒜_i_2 = 0 for all 1 ≤ i_1 < i_2 ≤ℓ. Next, we generalize Lemma <ref> to the case of s≥ 1. Suppose that ℓ > 1, s are positive integers. If 𝒜_1, 𝒜_2,…, 𝒜_ℓ are nonempty subsets of [m] satisfying Property I_s, let B_j={ℬ_i^(j)}_i=1^ℓ_j for all 1≤ j≤ s are defined as in (<ref>), then min{∑_j=1^s∑_y∈_p^*χ_𝐱(y(⋃_i=1^ℓ_jP_ℬ_i^(j))):𝐱∈_p^m*} = ∑_r=1^s(-1+∑_k=2^ℓ_r(-1)^k-1∑_1≤ i_1<…<i_k≤ℓ_rp^|⋂_j=1^kℬ_i_j^(r)|). 
From Lemma <ref> we know min{∑_j=1^s∑_y∈_p^*χ_𝐱(y(⋃_i=1^ℓ_jP_ℬ_i^(j))):𝐱∈_p^m*} = ∑_r=1^s(-1+∑_k=2^ℓ_r(-1)^k-1∑_1≤ i_1<…<i_k≤ℓ_rp^|⋂_j=1^kℬ_i_j^(r)|) if and only if ∑_y∈_p^*χ_𝐱(y(⋃_i=1^ℓ_rP_ℬ_i^(r)))=-1+∑_k=2^ℓ_r(-1)^k-1∑_1≤ i_1<…<i_k≤ℓ_rp^|⋂_j=1^kℬ_i_j^(r)| for all 1≤ r≤ s, if and only if the following conditions are satisfied simultaneously, (i) 𝐱_ℬ_j^(i)≠0 for any j∈ [ℓ_i] and i ∈ [s], (ii) 𝐱_ℬ_i_1^(u)∩ℬ_i_2^(u)= 0 for any 1 ≤ i_1 < i_2 ≤ℓ_u and 1≤ u≤ s. The above conditions are equivalent to (i)^* 𝐱_𝒜_i≠0 for any i ∈ [ℓ], (ii)^* 𝐱_∪_j=1^s Center(B_j)= 0. Such 𝐱 always exists due to Property I_s. § NEW CONSTRUCTIONS OF OPTIMAL LINEAR CODES In <cit.>, by setting the defining set to be complement of unions of some projective subspaces within P_[m], the authors obtained some Griesmer codes and distance-optimal codes with respect to the Griesmer bound. In this section, we generalize the construction in <cit.> by using (<ref>) and subsets of [m] satisfying Property I_s. Specifically, we require each 𝒟_i to be the complement of unions of some projective subspaces within P_[m]. Let p be an odd prime, m, ℓ > 1 and s be positive integers. Suppose that 𝒜_1, 𝒜_2,…, 𝒜_ℓ are nonempty subsets of [m] satisfying Property I_s, and B_j={ℬ_i^(j)}_i=1^ℓ_i for all j ∈ [s] are defined as in (<ref>). Let 𝒟_i=P_[m]∖⋃_j=1^ℓ_i P_ℬ_j^(i), 𝒟_i^c=⋃_j=1^ℓ_i P_ℬ_j^(i) for every i ∈ [s]. If p^m-1>∑_i=1^ℓ_rp^|ℬ_i^(r)|-1 for all r ∈ [s], then 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) defined by (<ref>) is a linear code over _p with parameters [sp^m-1/p-1-∑_r=1^s|𝒟_r^c|,m,sp^m-1-∑_i=1^ℓp^|𝒜_i|-1], where |𝒟_r^c|=∑_k=1^ℓ_r(-1)^k-1∑_1≤ i_1<…<i_k≤ℓ_rp^|⋂_j=1^kℬ_i_j^(r)|-1/p-1, r=1,…,s. From the principle of inclusion and exclusion (PIE), we get the form of |𝒟_r^c| for each r ∈ [s] directly. For any 𝐱∈_p^m*, by the orthogonal relation of additive characters, we have wt(c_𝐱) = ∑_r=1^s(p^m-1/p-1-|𝒟_r^c|-|{𝐝∈𝒟_r: 𝐱·𝐝=0}|) (a)=∑_r=1^s((p^m-1/p-1-|𝒟_r^c|)p-1/p+1/p+1/p∑_y∈_p^*χ_𝐱(y(⋃_i=1^ℓ_rP_ℬ_i^(r)))), where (a) can be found in the proof of Theorem 3.1 of <cit.>. It then follows from Lemma <ref> that the minimum value of wt(c_𝐱) for any 𝐱∈_p^m* is ∑_r=1^s(p^m-1- ∑_i=1^ℓ_rp^|ℬ_i^(r)|-1)=sp^m-1-∑_i=1^ℓp^|𝒜_i|-1. So the minimum distance of 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) is sp^m-1 -∑_i=1^ℓp^|𝒜_i|-1>0. It is evident that wt(c_𝐱)= 0 if and only if 𝐱 = 0, hence 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) has dimension m. Next, we will discuss the optimality of the linear codes given by Theorem <ref> with respect to the Griesmer bound. We follow the notations in <cit.>. Let 𝒜_1, 𝒜_2,…, 𝒜_ℓ be subsets of [m]. Assume that |𝒜_1| = … = |𝒜_i_1| =s_1, |𝒜_i_1+1| = …= |𝒜_i_2| = s_2,…, |𝒜_i_t-1+1| = … = |𝒜_ℓ| = s_t, where s_1 < s_2 < … < s_t and t ≤ℓ. Then ∑_i=1^ℓ p^|𝒜_i| = ∑_i=1^ta_ip^s_i, where a_i denotes the number of subsets of size s_i in 𝒜_1, 𝒜_2,…, 𝒜_ℓ for any i ∈ [t]. Put M(𝒜_1, 𝒜_2,…, 𝒜_ℓ) = max{a_i : i = 1,…, t}. Suppose that P(∑_i=1^ℓ p^|𝒜_i|-1) =∑_i=g^hb_ip^i is the p-adic expansion of ∑_i=1^ℓ p^|𝒜_i|-1 with coefficients b_i in {0, 1,…, p-1} and b_g≠ 0, b_h ≠ 0. Let C(∑_i=1^ℓp^|𝒜_i|-1)=∑_i=g^hb_i and let v_p(∑_i=1^ℓp^|𝒜_i|-1) be the p-adic valuation of ∑_i=1^ℓp^|𝒜_i|-1. It is easy to see that v_p(∑_i=1^ℓp^|𝒜_i|-1)= g. Let the notation be the same as in Theorem <ref>. (1)Then 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) defined by (<ref>) is a Griesmer code if and only if ℬ_1^(r),ℬ_2^(r), … ,ℬ_ℓ_r^(r) are mutually disjoint for each r∈ [s] and M(𝒜_1,𝒜_2, … ,𝒜_ℓ) ≤ p- 1. 
(2)If ∑_r=1^s|𝒟_r^c|> ∑_i=1^ℓ p^|𝒜_i|-C(∑_i=1^ℓ p^|𝒜_i|-1)/p-1-v_p(∑_i=1^ℓp^|𝒜_i|-1)-1, then 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) is distance-optimal with respect to the Griesmer bound. (1) Note that 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) is a p-ary linear code with parameters [sp^m-1/p-1-∑_r=1^s|𝒟_r^c|,m,sp^m-1-∑_i=1^ℓp^|𝒜_i|-1]. From ∑_j=0^m-1⌈sp^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^j⌉ = ∑_j=0^m-1⌈sp^m-1-P(∑_i=1^ℓp^|𝒜_i|-1)/p^j⌉ =∑_j=0^m-1⌈sp^m-1-∑_i=g^hb_ip^i/p^j⌉ =s∑_j=0^m-1p^m-1-j-∑_i=g^hb_i(∑_j=0^ip^i-j) =sp^m-1/p-1-∑_i=1^ℓp^|𝒜_i|-C(∑_i=1^ℓp^|𝒜_i|-1)/p-1 , we know that 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) is a Griesmer code if and only if ∑_i=1^ℓp^|𝒜_i|-C(∑_i=1^ℓp^|𝒜_i|-1)/p-1 = ∑_r=1^s(∑_k=1^ℓ_r(-1)^k-1∑_1≤ i_1<…<i_k≤ℓ_rp^|⋂_j=1^kℬ_i_j^(r)|-1)/p-1, i.e., C(∑_i=1^ℓp^|𝒜_i|-1)= -∑_r=1^s(∑_k=2^ℓ_r(-1)^k-1∑_1≤ i_1<…<i_k≤ℓ_rp^|⋂_j=1^kℬ_i_j^(r)|-1). For simplicity, we use LHS to denote the left hand side of equation (<ref>), and RHS to denote the right hand side of equation (<ref>). Then we can rewrite (<ref>) as LHS= RHS. It can be easily seen that for each r∈ [s], |𝒟_r|=∑_k=1^ℓ_r(-1)^k-1∑_1≤ i_1<…<i_k≤ℓ_rp^|⋂_j=1^kℬ_i_j^(r)|-1/p-1≤∑_i=1^ℓ_rp^|ℬ_i^(r)|-ℓ_r/p-1, which implies that ∑_k=2^ℓ_r(-1)^k-1∑_1≤ i_1<…<i_k≤ℓ_rp^|⋂_j=1^kℬ_i_j^(r)|-1≤ -ℓ_r, where the equality holds if and only if ℬ_1^(r),ℬ_2^(r), … ,ℬ_ℓ_r^(r) are mutually disjoint. Hence, RHS≥∑_r=1^sℓ_r=ℓ. On the other hand, we observe that LHS=C(∑_i=1^ℓp^|𝒜_i|-1)=∑_i=g^hb_i≤ℓ, where the equality holds if and only if M(𝒜_1,𝒜_2, … ,𝒜_ℓ) ≤ p-1. In summary, we have ℓ≥ LHS, and RHS≥ℓ. Therefore, (<ref>) holds if and only if RHS=ℓ and LHS=ℓ, if and only if ℬ_1^(r),ℬ_2^(r), … ,ℬ_ℓ_r^(r) are mutually disjoint for every 1≤ r≤ s and M(𝒜_1,𝒜_2, … ,𝒜_ℓ) ≤ p- 1. (2) For any positive integer t, ∑_j=0^m-1 ⌈sp^m-1-∑_i=1^ℓp^|𝒜_i|-1+t/p^j⌉ ≥∑_j=0^m-1⌈sp^m-1-∑_i=1^ℓp^|𝒜_i|-1+1/p^j⌉ =sp^m-1/p-1-∑_i=1^ℓp^|𝒜_i|-C(∑_i=1^ℓp^|𝒜_i|-1)/p-1+v_p(∑_i=1^ℓp^|𝒜_i|-1)+1. Due to ∑_r=1^s|𝒟_r^c|>∑_i=1^ℓ p^|𝒜_i|-C(∑_i=1^ℓ p^|𝒜_i|-1)/p-1-v_p(∑_i=1^ℓp^|𝒜_i|-1)-1, we have sp^m-1/p-1-∑_r=1^s|𝒟_r^c| < sp^m-1/p-1-∑_i=1^ℓp^|𝒜_i|-C(∑_i=1^ℓp^|𝒜_i|-1)/p-1+v_p(∑_i=1^ℓp^|𝒜_i|-1)+1 ≤∑_j=0^m-1⌈sp^m-1-∑_i=1^ℓp^|𝒜_i|-1+t/p^j⌉. According to the Griesmer bound, there is no p-ary [sp^m-1/p-1-∑_r=1^s|𝒟_r^c|,m,d> sp^m-1-∑_i=1^ℓp^|𝒜_i|-1] linear code. Therefore, the linear code 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) with parameters [sp^m-1/p-1-∑_r=1^s|𝒟_r^c|, m, sp^m-1-∑_i=1^ℓp^|𝒜_i|-1] is distance-optimal with respect to the Griesmer bound. Theorems 3.1 and 3.2 in <cit.> can be regarded as special cases of Theorems <ref> and <ref> with s=1, respectively. Suppose that 𝒜_1, 𝒜_2,…, 𝒜_ℓ are nonempty subsets of [m] satisfying Property I_1, then for any integer t≥ 1, the t-copies of 𝒜_1, 𝒜_2,…, 𝒜_ℓ satisfy Property I_t. From Theorem <ref>, we can derive that the r-repetition of Griesmer codes constructed from Theorem 3.2 in <cit.> are also Griesmer codes, where 1≤ r ≤⌊p-1/M(𝒜_1,𝒜_2, … ,𝒜_ℓ)⌋. In the follows, we will give two corollaries of Theorems <ref> and <ref> by considering the case of s=2, to show that it is possible to construct good linear codes with new parameters, and that sometimes the weight distribution can be easily determined. Let p≥ 3 be an odd prime and let m be a positive integer. Suppose that 𝒜_1, 𝒜_2 𝒜_3, 𝒜_4 are nonempty subsets of [m] such that (i) 𝒜_3⊆𝒜_1, 𝒜_4⊆𝒜_2, (ii) 𝒜_1∩𝒜_2=∅, (iii) M(𝒜_1, 𝒜_2,𝒜_3, 𝒜_4)≤ p-1, and (iv) p^m > p^|𝒜_1| + p^|𝒜_2|, p^m > p^|𝒜_3|+ p^|𝒜_4|. 
If 𝒟_1 = P_[m]∖ (P_𝒜_1∪ P_𝒜_2), 𝒟_2 = P_[m]∖ (P_𝒜_3∪ P_𝒜_4), then 𝒞_(𝒟_1,𝒟_2) constructed by (<ref>) is a p-ary [2p^m-∑_i=1^4p^|𝒜_1|+2/p-1 ,m, 2p^m-1-∑_i=1^4p^|𝒜_i|-1] Griesmer code, whose weight distribution is listed in Table <ref>. From (i)-(ii), we can check that 𝒜_1, 𝒜_2, 𝒜_3, 𝒜_4 satisfy Property I_2. Together with (iii)-(iv), it follows from Theorem <ref> that 𝒞_(𝒟_1,𝒟_2) is a Griesmer code over _p. For any 𝐱∈^m*_p , the weight of a codeword c_𝐱 is (c_𝐱)=2p^m-1-∑_i=1^4p^|𝒜_i|-1+ 4/p+1/p∑_y∈_p^*χ_𝐱(y(P_𝒜_1∪ P_𝒜_2))+1/p∑_y∈_p^*χ_𝐱(y(P_𝒜_3∪ P_𝒜_4)). According to Lemma <ref>, we have (c_𝐱) = {[ 2p^m-1, if 𝐱_𝒜_1=0, 𝐱_𝒜_2=0,; 2p^m-1-p^|𝒜_2|-1, if 𝐱_𝒜_1=0, 𝐱_𝒜_2≠0, 𝐱_𝒜_4=0,; 2p^m-1-p^|𝒜_2|-1-p^|𝒜_4|-1, if 𝐱_𝒜_1=0, 𝐱_𝒜_2≠0, 𝐱_𝒜_4≠0,; 2p^m-1-p^|𝒜_1|-1, if 𝐱_𝒜_1≠0, 𝐱_𝒜_2=0,𝐱_𝒜_3=0,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_3|-1, if 𝐱_𝒜_1≠0, 𝐱_𝒜_2=0, 𝐱_𝒜_3≠0,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1, if 𝐱_𝒜_1≠0, 𝐱_𝒜_2≠0, 𝐱_𝒜_3=0, 𝐱_𝒜_4=0,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1-p^|𝒜_4|-1, if 𝐱_𝒜_1≠0, 𝐱_𝒜_2≠0, 𝐱_𝒜_3=0, 𝐱_𝒜_4≠0,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1-p^|𝒜_3|-1, if 𝐱_𝒜_1≠0, 𝐱_𝒜_2≠0, 𝐱_𝒜_3≠0, 𝐱_𝒜_4=0,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1-p^|𝒜_3|-1-p^|𝒜_4|-1, if 𝐱_𝒜_1≠0, 𝐱_𝒜_2≠0, 𝐱_𝒜_3≠0, 𝐱_𝒜_4≠0. ]. The multiplicity corresponding to each weight follows. Below we give an example to illustrate Corollary <ref>. Let p = m = 3, 𝒜_1 = {1, 2}, 𝒜_2 = {3}, 𝒜_3 = {1}, 𝒜_4 = ∅. Let 𝒟_1 = P_[m]∖ (P_𝒜_1∪ P_𝒜_2), 𝒟_2 = P_[m]∖ (P_𝒜_3∪ P_𝒜_4), 𝒞_(𝒟_1,𝒟_2) be the 3-ary code constructed by (<ref>), from Corollary <ref>, we know 𝒞_(𝒟_1,𝒟_2) is a Griesmer code [20,3,13]_3 with weight enumerator 1+12x^13+10x^14+2x^15+2x^17. Let p be an odd prime and m a positive integer. Suppose that 𝒜_1, 𝒜_2, 𝒜_3, 𝒜_4 are nonempty subsets of [m] such that (i) 𝒜_3⊆𝒜_1, 𝒜_4⊆𝒜_2, (ii) 𝒜_1∩𝒜_2≠∅, (iii) 𝒜_3⊈𝒜_1∩𝒜_2, 𝒜_4⊈𝒜_1∩𝒜_2, and (iv) p^m > p^|𝒜_1| + p^|𝒜_2|. If 𝒟_1 = P_[m]∖ (P_𝒜_1∪ P_𝒜_2), 𝒟_2 = P_[m]∖ (P_𝒜_3∪ P_𝒜_4), then 𝒞_(𝒟_1,𝒟_2) constructed by (<ref>) is a p-ary linear code with parameters [2p^m-∑_i=1^4p^|𝒜_i|+p^|𝒜_1∩𝒜_2|+p^|𝒜_3∩𝒜_4|/p-1 ,m, 2p^m-1 -∑_i=1^4 p^|𝒜_i|-1]. Furthermore, without loss of generality, let |𝒜_3|=min{|𝒜_i|: i=1,2,3,4}, then 𝒞_(𝒟_1,𝒟_2) is distance-optimal with respect to the Griesmer bound if any one of the following conditions is satisfied. (1) |𝒜_3| >p^|𝒜_1∩𝒜_2|+p^|𝒜_3∩𝒜_4|-2/p-1, if M(𝒜_1,𝒜_2,𝒜_3,𝒜_4)≤ p-1. (2) |𝒜_3| >p^|𝒜_1∩𝒜_2|+p^|𝒜_3∩𝒜_4|/p-1, if p=3, |𝒜_1|=|𝒜_2|=|𝒜_4|>|𝒜_3|. (3) |𝒜_3| >p^|𝒜_1∩𝒜_2|+p^|𝒜_3∩𝒜_4|/p-1-1, if p=3, |𝒜_1|>|𝒜_2|=|𝒜_3|=|𝒜_4|. (4) |𝒜_3|>p^|𝒜_1∩𝒜_2|+p^|𝒜_3∩𝒜_4|/p-1-1, if p=3, |𝒜_1|=|𝒜_2|=|𝒜_3|=|𝒜_4|. From conditions (i)-(iv) and Theorem <ref>, the parameters of 𝒞_(𝒟_1,𝒟_2) follows. (1) When M(𝒜_1,𝒜_2,𝒜_3,𝒜_4)≤ p-1, it is easy to see that C(∑_i=1^4p^|𝒜_i|-1)=4, v_p(∑_i=1^4p^|𝒜_i|-1)=|𝒜_3|-1. Due to |𝒜_3| >p^|𝒜_1∩𝒜_2|+p^|𝒜_3∩𝒜_4|-2/p-1, we obtain that |𝒟_1^c|+|𝒟_2^c| =∑_i=1^4p^|𝒜_i|-p^|𝒜_1∩𝒜_2|-p^|𝒜_3∩𝒜_4|-2/p-1 >∑_i=1^4p^|𝒜_i|-4/p-1-|𝒜_3| = ∑_i=1^4 p^|𝒜_i|-C(∑_i=1^4 p^|𝒜_i|-1)/p-1-v_p(∑_i=1^4p^|𝒜_i|-1)-1. According to Theorem <ref>, 𝒞_(𝒟_1,𝒟_2) is distance-optimal with respect to the Griesmer bound. (2-4) Since M(𝒜_1,𝒜_2,𝒜_3,𝒜_4)≤ 4, M(𝒜_1,𝒜_2,𝒜_3,𝒜_4)> p-1 if and only if p=3. M(𝒜_1,𝒜_2,𝒜_3,𝒜_4)=3 ⇔{[ |𝒜_1|=|𝒜_2|=|𝒜_4|>|𝒜_3|; |𝒜_1|>|𝒜_2|=|𝒜_3|=|𝒜_4| ]. ⇔{[ C(∑_i=1^4p^|𝒜_i|-1)=2, v_p(∑_i=1^4p^|𝒜_i|-1)=|𝒜_3|-1.; C(∑_i=1^4p^|𝒜_i|-1)=2, v_p(∑_i=1^4p^|𝒜_i|-1)=|𝒜_3|. ]. M(𝒜_ 1,𝒜_2,𝒜_3,𝒜_4)=4 ⇔ |𝒜_1|=|𝒜_2|=|𝒜_4|=|𝒜_3| ⇔ C(∑_i=1^4p^|𝒜_i|-1)=2, v_p(∑_i=1^4p^|𝒜_i|-1)=|𝒜_3|. These three circumstances corresponding to (2-4) respectively, the remaining proofs of (2-4) can be done similarly to (1). 
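To make the corollaries above concrete, the following short Python sketch brute-forces the two-part defining-set code for the example parameters p = m = 3, 𝒜_1 = {1,2}, 𝒜_2 = {3}, 𝒜_3 = {1}, 𝒜_4 = ∅ given earlier. It assumes the standard defining-set form of the construction (<ref>), namely c_𝐱 = (𝐱·𝐝)_{𝐝∈𝒟_1, 𝐝∈𝒟_2}, which is not restated in this excerpt; the variable names and the exhaustive enumeration are illustrative choices rather than part of the paper. Under that assumption the script recovers the [20,3,13]_3 Griesmer code with weight enumerator 1+12x^13+10x^14+2x^15+2x^17 and checks the length against the Griesmer bound ∑_{j=0}^{k-1}⌈d/p^j⌉.

```python
from itertools import product
from collections import Counter

p, m = 3, 3
A = [{1, 2}, {3}, {1}, set()]      # A_1, A_2, A_3, A_4 from the example above (1-indexed coordinates)
pairs = [(0, 1), (2, 3)]           # D_1 removes P_{A_1} U P_{A_2}; D_2 removes P_{A_3} U P_{A_4}

def projective_points(support):
    """Points of P_[m] (first nonzero coordinate normalised to 1) supported on `support`."""
    pts = set()
    for v in product(range(p), repeat=m):
        if any(v) and all(v[i] == 0 for i in range(m) if i + 1 not in support):
            lead = next(x for x in v if x)
            inv = pow(lead, p - 2, p)          # inverse of the leading entry mod p (p prime)
            pts.add(tuple((inv * x) % p for x in v))
    return pts

P_full = projective_points(set(range(1, m + 1)))
D = [P_full - (projective_points(A[i]) | projective_points(A[j])) for i, j in pairs]

# Weight of c_x = number of defining-set points d (over both D_1 and D_2) with x . d != 0 mod p.
weights = Counter()
for x in product(range(p), repeat=m):
    if any(x):
        wt = sum(1 for Dr in D for d in Dr if sum(a * b for a, b in zip(x, d)) % p)
        weights[wt] += 1

n, k, d = sum(len(Dr) for Dr in D), m, min(weights)
griesmer = sum(-(-d // p ** j) for j in range(k))   # sum of ceil(d / p^j), j = 0..k-1
print(f"[{n},{k},{d}]_{p} code, weight distribution {dict(sorted(weights.items()))}")
print("length meets the Griesmer bound:", n == griesmer)
```

Because the enumeration runs over all p^m messages and all projective points, this check is only practical for small p and m, but it gives a quick sanity check when experimenting with other choices of 𝒜_1, …, 𝒜_4 in the corollaries above.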
In general, it is complicated to calculate the weight distribution of code constructed in Corollary <ref>, as we need to consider the following 15 cases: {[ 𝐱_𝒜_1=0, 𝐱_𝒜_2=0,; 𝐱_𝒜_1=0, 𝐱_𝒜_2≠0,{[ 𝐱_𝒜_4=0,; 𝐱_𝒜_4≠0, ].; 𝐱_𝒜_1≠0, 𝐱_𝒜_2=0,{[ 𝐱_𝒜_3=0,; 𝐱_𝒜_3≠0, ].; 𝐱_𝒜_1≠0, 𝐱_𝒜_2≠0, {[ 𝐱_𝒜_3=0,𝐱_𝒜_4=0, {[ 𝐱_𝒜_1∩𝒜_2=0,; 𝐱_𝒜_1∩𝒜_2≠0, ].; 𝐱_𝒜_3=0,𝐱_𝒜_4≠0, {[ 𝐱_𝒜_1∩𝒜_2=0,; 𝐱_𝒜_1∩𝒜_2≠0, ].; 𝐱_𝒜_3≠0, 𝐱_𝒜_4=0, {[ 𝐱_𝒜_1∩𝒜_2=0,; 𝐱_𝒜_1∩𝒜_2≠0, ].; 𝐱_𝒜_3≠0, 𝐱_𝒜_4≠0, {[ 𝐱_𝒜_3∩𝒜_4=0, {[ 𝐱_𝒜_1∩𝒜_2=0,; 𝐱_𝒜_1∩𝒜_2≠0, ].; 𝐱_𝒜_3∩𝒜_4≠0, {[ 𝐱_𝒜_1∩𝒜_2=0,; 𝐱_𝒜_1∩𝒜_2≠0. ]. ]. ]. ]. Correspondingly, (c_𝐱) is equal to {[ 2p^m-1,; 2p^m-1-p^|𝒜_2|-1,; 2p^m-1-p^|𝒜_2|-1-p^|𝒜_4|-1,; 2p^m-1-p^|𝒜_1|-1,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_3|-1,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1+p^|𝒜_1∩𝒜_2|-1,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1-p^|𝒜_4|-1,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1-p^|𝒜_4|-1+p^|𝒜_1∩𝒜_2|-1,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1-p^|𝒜_3|-1,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1-p^|𝒜_3|-1+p^|𝒜_1∩𝒜_2|,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1-p^|𝒜_3|-1-p^|𝒜_4|-1,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1-p^|𝒜_3|-1-p^|𝒜_4|-1+p^|𝒜_1∩𝒜_2|-1,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1-p^|𝒜_3|-1-p^|𝒜_4|-1+p^|𝒜_3∩𝒜_4|-1,; 2p^m-1-p^|𝒜_1|-1-p^|𝒜_2|-1-p^|𝒜_3|-1-p^|𝒜_4|-1+p^|𝒜_3∩𝒜_4|-1+p^|𝒜_1∩𝒜_2|-1. ]. We will not give the explicit expression of the weight distribution here, but we will illustrate the computation with an example. Let p = 3, m=4 and 𝒜_1 = {1, 2, 3}, 𝒜_2 = {3, 4}, 𝒜_3 = {1, 2}, 𝒜_4 = {3, 4}. Let 𝒟_1 = P_[m]∖ (P_𝒜_1∪ P_𝒜_2), 𝒟_2 = P_[m]∖ (P_𝒜_3∪ P_𝒜_4), 𝒞_(𝒟_1,𝒟_2) be the 3-ary code constructed by (<ref>). From Corollary <ref>, we know that the parameters of 𝒞_(𝒟_1,𝒟_2) are [56,4,36]_3. Referring to the above analysis, for 𝐱∈_3^4*, we have (c_𝐱)={[ 2·3^3-3-3, 𝐱_𝒜_1=0, 𝐱_𝒜_2≠0,; 2·3^3-3^2, 𝐱_𝒜_1≠0, 𝐱_𝒜_2=0, 𝐱_𝒜_3=0,; 2·3^3-3^2-3, 𝐱_𝒜_1≠0, 𝐱_𝒜_2=0, 𝐱_𝒜_3≠0,; 2·3^3-3^2-3-3, 𝐱_𝒜_1≠0, 𝐱_𝒜_2≠0, 𝐱_𝒜_3=0,𝐱_𝒜_1∩𝒜_2=0,; 2·3^3-3^2-3-3+3, 𝐱_𝒜_1≠0, 𝐱_𝒜_2≠0, 𝐱_𝒜_3=0,𝐱_𝒜_1∩𝒜_2≠0,; 2·3^3-3^2-3-3-3, 𝐱_𝒜_1≠0, 𝐱_𝒜_2≠0, 𝐱_𝒜_3≠0,𝐱_𝒜_1∩𝒜_2=0,; 2·3^3-3^2-3-3-3+3, 𝐱_𝒜_1≠0, 𝐱_𝒜_2≠0, 𝐱_𝒜_3≠0,𝐱_𝒜_1∩𝒜_2≠0. ]. So 𝒞_(𝒟_1,𝒟_2) here is a 5-weight code with nonzero weights {36,39,42,45,48}. According to the best known linear code from the Magma BKLC (_3, 56, 4) with weight enumerator 1+76x^36+4x^45, it can be seen that 𝒞_(𝒟_1,𝒟_2) is a new distance-optimal linear code. § ALPHABET-OPTIMAL (R,Δ)-LRCS In this section, we will revisit the locality of codes constructed in <cit.>, i.e., the codes in Theorem <ref> with s=1. It turns out that the constructed codes in <cit.>, which are alphabet-optimal 2-LRCs, can also be characterized as alphabet-optimal (2,p-1)-LRCs. Furthermore, we will investigate the conditions under which the codes constructed from (<ref>) possess (2, p) and (2, p-2) localities. Some new alphabet-optimal (r,δ)-LRCs are also provided. Recall that P_[m] = {(1, a_2, a_3, … , a_m) : a_2, … , a_m ∈_p}∪ {(0, 1, a_3, … , a_m) : a_3, … , a_m ∈_p}∪…∪{(0, 0, … , 1)}. Let P_[m]={(1, a_2, … , a_m) : a_2, … , a_m ∈_p^*}. When p^m > ∑_i=1^ℓ p^|𝒜_i|, which implies that 𝒜_1,𝒜_2, … ,𝒜_ℓ are proper subsets of [m], then P_[m] is a subset of 𝒟= P_[m]∖⋃_i=1^ℓP_𝒜_i, as P_[m]∩ P_𝒜_i=∅ for any 1≤ i≤ℓ. Next, we will give a formal definition of (r,δ)-LRCs from the aspect of generator matrix. Let 𝒞 be a p-ary linear code with generator matrix G = [𝐠_1, …, 𝐠_n]. The i-th coordinate, 1 ≤ i ≤ n, of 𝒞 is said to have (r,δ)-locality if there exists a subset ℐ⊂{𝐠_1, …, 𝐠_n} containing 𝐠_i such that (1) |ℐ| ≤ r+δ-1, (2) any δ-1 vectors in ℐ are linear combinations of the remaining vectors in ℐ. 
If all the coordinates of 𝒞 have (r,δ)-locality, then 𝒞 is called an (r,δ)-locally repairable code, or in short, (r,δ)-LRC. As previously mentioned, P_[m] can be considered as a subset of L_[m]. Therefore, we can refer to the elements of P_[m] as vectors without any ambiguity. In P_[m], the first nonzero coordinate of any vector is 1, which means that the linear combination of vectors in P_[m] may not necessarily belong to P_[m]. For the sake of convenience in later discussions, for any 𝐟∈ L_[m], we denote by [𝐟] the vector equivalent to 𝐟 whose first nonzero coordinate is 1. §.§ (2,p-2)-LRCs In this subsection, we will explore the (2,p-2)-locality of p-ary codes constructed by (<ref>) via analysing the linear dependence among vectors in the defining sets. Let p≥ 5 be an odd prime, 𝔽_p={0,-1,α_1,α_2,…,α_p-2}. (1) For any 𝐠∈P_[m], there exists 𝐡∈P_[m], 𝐡≠𝐠, such that [𝐡+α_i𝐠]∈P_[m] for all i ∈ [p-2]∖{j}, where j is arbitrary chosen from [p-2]. (2) For any 𝐠∈ P_[m]∖P_[m], there exists 𝐡∈P_[m] such that [𝐡+α_i𝐠]∈P_[m] for all i ∈ [p-2]. (1) For 𝐠∈P_[m], we can write 𝐠 = (1, g_2,…, g_m), where g_i≠ 0 for every i∈{2,…,m}. For any j∈ [p-2], if we choose 𝐡=(1,-α_jg_2,…,-α_jg_m)∈P_[m], then [𝐡+α_i𝐠]=1/1+α_i(1+α_i, (α_i-α_j)g_2,…,(α_i-α_j)g_m)∈P_[m] for all i ∈ [p-2]∖{j}. (2) For 𝐠∈ P_[m]∖P_[m], denote i the position of first nonzero component of 𝐠, where 1≤ i≤ m. Then we can express 𝐠 as follows 𝐠 = (0,…,0^i-1,1, g_i+1,…, g_m). Since 𝐠∉P_[m], there is a subset Z⊆{i+1,i+2,…,m} such that g_j=0 for j∈ Z, g_j≠ 0 for j∈{i+1,i+2,…,m}∖ Z, where i-1+|Z|≥ 1. Let 𝐡 be an arbitrary element in {(1,…,1^i, h_i+1,…, h_m): h_r≠ 0   if  r∈ Z, h_r= g_r  if  r∈{i+1,i+2,…,m}∖ Z}. It is easy to check that 𝐡∈P_[m] and [𝐡+α_i𝐠]∈P_[m] for all i ∈ [p-2]. Below we give an example to illustrate Lemma <ref>. Let p=5, m=4, 𝔽_p={0,-1,α_1,α_2,…,α_p-2}={0,-1,1,2,3}. (1) For 𝐠 = (1, 2, 3, 4), let j=3, from Lemma <ref>, we can choose 𝐡=(1,4,1,3), and one can check that [𝐡+𝐠]∈P_[4], [𝐡+2𝐠]∈P_[4], [𝐡+3𝐠]∉P_[4]. (2) For 𝐠 = (1, 0, 3, 4), from Lemma <ref>, we can choose 𝐡=(1,1,3,4), and one can check that [𝐡+𝐠]∈P_[4], [𝐡+2𝐠]∈P_[4], [𝐡+3𝐠]∈P_[4]. (3) For 𝐠 = (0, 1, 0, 4), from Lemma <ref>, we can choose 𝐡=(1,1,3,4), and one can check that [𝐡+𝐠]∈P_[4], [𝐡+2𝐠]∈P_[4], [𝐡+3𝐠]∈P_[4]. By combining Definition <ref> and Lemma <ref>, we can establish a criterion for determining the (2,p-2)-locality of codes constructed from (<ref>). Let p≥ 5 be an odd prime, ℓ≥ 1 and m≥ 2 be integers. Suppose that 𝒜_1, 𝒜_2, …, 𝒜_ℓ are proper subsets of [m] such that they do not contain each other. Let 𝒟= P_[m]∖ (⋃_i=1^ℓ P_𝒜_i), then 𝒞_𝒟 constructed by (<ref>) is a (2,p-2)-LRC. Since 𝒜_1, 𝒜_2, …, 𝒜_ℓ are proper subsets of [m], P_[m]⊂𝒟. From Lemma <ref>, for each 𝐠∈P_[m], let j=p-2, then there exists 𝐡∈P_[m], 𝐡≠𝐠, and a set ℐ={𝐠,𝐡,[𝐡+α_1𝐠],…, [𝐡+α_p-3𝐠]}⊂P_[m]⊂𝒟 of size p-1 such that any 2 vectors in ℐ can be linearly combined to get the remaining vectors in ℐ. From Definition <ref>, coordinates occupied by P_[m] have (2,p-2)-locality. Similarly, we can prove that coordinates occupied by 𝒟∖P_[m] have (2,p-1)-locality. Overall, 𝒞_𝒟 is a (2,p-2)-LRC. The part (1) of Theorem 4.2 in <cit.> can be obtained by Theorem <ref> . §.§ (2,p-1)-LRCs In this subsection, after adding an extra requirement to the defining sets and utilizing the results in Subsection <ref>, we will determine the (2,p-1)-locality of p-ary codes constructed by (<ref>). Let p≥ 3 be an odd prime, ℓ≥ 1 and m≥ 2 be integers. 
Suppose that 𝒜_1, 𝒜_2, …, 𝒜_ℓ are proper subsets of [m] such that they do not contain each other, let 𝒟= P_[m]∖ (⋃_i=1^ℓ P_𝒜_i). If there exists a subset 𝒜^*⊂ [m] with size m-1 such that 𝒜^*≠𝒜_i for all i∈ [ℓ], then 𝒞_𝒟 constructed by (<ref>) is a (2,p-1)-LRC. Let 𝔽_p={0,-1,α_1,α_2,…,α_p-2}. According to Lemma <ref> and Theorem <ref>, if we can show that for any 𝐠∈P_[m]⊂𝒟, there exists 𝐡∈𝒟 such that |{𝐠,𝐡,[𝐡-𝐠], [𝐡+α_1𝐠], …, [𝐡+α_p-2𝐠]}∩𝒟|≥ p, the proof is done. For simplicity, define [𝐠,𝐡]:={𝐠,𝐡,[𝐡-𝐠], [𝐡+α_1𝐠], …, [𝐡+α_p-2𝐠]}. We can see that in [𝐠,𝐡], any 2 vectors can be linearly combined to obtain all the remaining vectors. Next, we prove the theorem by considering the following two cases. Case (i): Suppose 𝒜^*={2,3,…,m}. For any 𝐠=(1,g_2,…,g_m)∈P_[m], where g_i≠ 0 for all 2≤ i≤ m, let 𝐡=(h_1,h_2,…,h_m)=g_2^-1(0,g_2,…,g_m). For each i∈ [ℓ], since 𝒜^*∖𝒜_i≠∅, there exists t_i∈𝒜^* such that t_i∉𝒜_i. As h_t_i≠ 0, we have 𝐡∉ P_𝒜_i for any i∈ [ℓ]. Hence 𝐡∉⋃_j=1^ℓP_𝒜_j, i.e., 𝐡∈𝒟. It is easy to verify that [𝐡+(-g_2^-1𝐠)]=(1,0,…,0) is the only possible element in [𝐠,𝐡] which does not belong to 𝒟, and all the remaining elements in [𝐠,𝐡] belong to P_[m]. So |[𝐠,𝐡]∩𝒟|≥ p. Case (ii): Suppose 𝒜^*=[m]∖{j} for some j∈{2, …, m}. For any 𝐠=(1,g_2,…,g_m)∈P_[m], where g_i≠ 0 for all 2≤ i≤ m, let 𝐡=(1,h_2, h_3,…,h_m) with h_i=g_i for i∈𝒜^*∖{1}, and h_j=0. Similarly, we can check that 𝐡∈𝒟, [𝐡-𝐠] = (0,…,0^j-1,1, 0,…, 0) is the only possible element in [𝐠,𝐡] which does not belong to 𝒟, and all the remaining elements in [𝐠,𝐡] belong to P_[m]. So |[𝐠,𝐡]∩𝒟|≥ p. In summary, the proof is completed. The part (2) of Theorem 4.2 in <cit.> is a special case of Theorem <ref> with p=3. Below we give an example to illustrate Theorem <ref>. Let p=5, m=4, 𝔽_p={0,-1,α_1,α_2,α_3}={0,-1,1,2,3}. (1) For 𝒜^*={2,3,4}, 𝐠 = (1, 2, 3, 4), from Theorem <ref>, we can choose 𝐡=(0,1,4,2), and one can check that [𝐡+𝐠]∈P_[4], [𝐡+3𝐠]∈P_[4], [𝐡-𝐠]∈P_[4]. (2) For 𝒜^*={1,3,4}, 𝐠 = (1, 2, 3, 4), from Theorem <ref>, we can choose 𝐡=(1,0,3,4), and one can check that [𝐡+𝐠]∈P_[4], [𝐡+2𝐠]∈P_[4], [𝐡+3𝐠]∈P_[4]. By combining Theorems <ref> and <ref>, we can provide a construction of p-ary alphabet-optimal (2,p-1)-LRCs which are Griesmer codes. Let p≥ 3 be an odd prime, ℓ≥ 1 and m≥ 2 be integers. Assume 𝒜_1,𝒜_2,…,𝒜_ℓ are mutually disjoint subsets of [m] and M(𝒜_1,𝒜_2,…,𝒜_ℓ) ≤ p-1, let 𝒟= P_[m]∖ (⋃_i=1^ℓP_𝒜_i). Then 𝒞_𝒟 constructed by (<ref>) is an alphabet-optimal (2,p-1)-LRC with parameters [p^m-∑_i=1^ℓp^|𝒜_i|+ℓ-1/p-1 ,m, p^m-1 -∑_i=1^ℓp^|𝒜_i|-1]. Since 𝒜_1,𝒜_2,…,𝒜_ℓ are mutually disjoint and M(𝒜_1,𝒜_2,…,𝒜_ℓ) ≤ p-1, we know that there is at most one subset among {𝒜_i}_i=1^ℓ, say 𝒜_j, has size m-1, which means that there is at least a subset 𝒜^*⊂ [m] with size m-1 such that 𝒜^*≠𝒜_i for all i∈ [ℓ]. From Theorems <ref> and <ref>, 𝒞_𝒟 is a p-ary [n,k,d] Griesmer code with (2,p-1)-locality, where n=p^m-∑_i=1^ℓp^|𝒜_i|+ℓ-1/p-1, k=m, d=p^m-1 -∑_i=1^ℓp^|𝒜_i|-1. Since 𝒜_1,𝒜_2,…,𝒜_ℓ are mutually disjoint, we have ∑_i=1^ℓp^|𝒜_i|-1≤ p^m-2+1, then ⌈p^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^m-2⌉≥ p-1. Note that n=∑_j=0^m-1⌈p^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^j⌉. Thus ∑_j=0^m-2⌈p^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^j⌉= n-1, ∑_j=0^m-3⌈p^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^j⌉≤ n-p. Thanks to the Griesmer bound, k_ opt^(p)(n -p, d) = m -2. Utilizing the bound of (<ref>) with t = 1, we get that k ≤ 2 + k_ opt^(p)(n-p, d) = m. Therefore, the linear code 𝒞_𝒟 achieves the bound of (<ref>). 
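The repair sets [𝐠,𝐡] used in the proofs above are sets of collinear projective points, and since any two distinct points of a projective line span it, a coordinate indexed by 𝐠∈𝒟 has (2,δ)-locality as soon as some line through 𝐠 meets 𝒟 in at least δ+1 points. The sketch below checks this combinatorial condition by brute force for one illustrative instance of the corollary (p = 3, m = 3, 𝒜_1 = {1}, 𝒜_2 = {2}, which are mutually disjoint with M(𝒜_1,𝒜_2) ≤ p−1); it assumes, as in the defining-set construction, that the columns of a generator matrix of 𝒞_𝒟 are exactly the points of 𝒟. The parameter choice and helper names are ours, not the paper's.

```python
from itertools import product

p, m = 3, 3
A_list = [{1}, {2}]                    # mutually disjoint subsets of [m], M(A_1, A_2) <= p - 1

def normalize(v):
    """Scale a nonzero vector so its first nonzero coordinate is 1."""
    lead = next(x for x in v if x % p)
    inv = pow(lead, p - 2, p)
    return tuple((inv * x) % p for x in v)

def projective_points(support):
    pts = set()
    for v in product(range(p), repeat=m):
        if any(v) and all(v[i] == 0 for i in range(m) if i + 1 not in support):
            pts.add(normalize(v))
    return pts

P_full = projective_points(set(range(1, m + 1)))
D = P_full - set().union(*(projective_points(Ai) for Ai in A_list))

def line(g, h):
    """All p + 1 projective points on the line spanned by distinct points g and h."""
    pts = {normalize(g), normalize(h)}
    for a in range(1, p):
        pts.add(normalize(tuple((gi + a * hi) % p for gi, hi in zip(g, h))))
    return pts

delta = p - 1
ok = all(
    any(len(line(g, h) & D) >= delta + 1 for h in D if h != g)
    for g in D
)
print(f"|D| = {len(D)}; every coordinate has (2,{delta})-locality: {ok}")
```

For p = 3 this reports that every coordinate has (2,2)-locality, matching the (2,p−1)-locality guaranteed by the corollary; replacing delta + 1 with p + 1 in the test would instead probe the stronger (2,p)-locality discussed later in this section, which requires a full projective line through 𝐠 to lie inside 𝒟.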
In <cit.>, the authors showed that code 𝒞_𝒟 constructed in Theorem <ref> is an alphabet-optimal 2-LRC, here we prove that 𝒞_𝒟 is actually an alphabet-optimal (2,p-1)-LRC, our results are more concise. Next, we give a construction of alphabet-optimal (2,p-1)-LRCs which are not Griesmer codes. Let p≥ 3 be an odd prime and m=3. Let 𝒜_1={1,2} and 𝒜_2={2,3}. Let 𝒟^c = P_𝒜_1∪ P_𝒜_2 and 𝒟= P_[m]∖𝒟^c, then C_𝒟 constructed by (<ref>) is a p-ary alphabet-optimal (2,p-1)-LRC with parameters [p^2-p ,3, p^2 - 2p]. From Theorems <ref> and <ref>, 𝒞_𝒟 is a p-ary [n,k,d] code with (2,p-1)-locality, where n=p^2-p, k=3, d=p^2 - 2p. Utilizing the bound of (<ref>) with t = 1, we get that k ≤ 2 + k_ opt^(p)(p^2-2p, p^2-2p) = 3. Therefore, the linear code 𝒞_𝒟 achieves the bound of (<ref>). Below we give an example to illustrate Theorem <ref>. Let p=5, m=3. Assume that 𝒜_1={1,2} and 𝒜_2={2,3}. Let 𝒟^c = P_𝒜_1∪ P_𝒜_2 and 𝒟= P_[3]∖𝒟^c, then C_𝒟 constructed by (<ref>) is a 5-ary linear code with a generator matrix G=[ 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1; 1 1 1 1 2 2 2 2 3 3 3 3 4 4 4 4 0 0 0 0; 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 ]. By Magma software, we know the parameters of C_𝒟 are [20, 3, 15]. By Theorem <ref>, we can partition the matrix G into the following four submatrices [ 1 1 1 1 1; 1 0 2 3 4; 1 1 1 1 1 ], [ 1 1 1 1 1; 1 0 2 3 4; 2 2 2 2 2 ], [ 1 1 1 1 1; 1 0 2 3 4; 3 3 3 3 3 ], [ 1 1 1 1 1; 1 0 2 3 4; 4 4 4 4 4 ]. In each submatrix, any two columns can be linearly combined to get the remaining three columns, from Definition <ref>, C_𝒟 is a (2,4)-LRC. It is easy to check that the codes C_𝒟 constructed in Theorem <ref> are also Singleton-optimal. This phenomenon reminds us that it may be an interesting topic to construct LRCs that achieve both Singleton-optimality and alphabet-optimality. §.§ (2,p)-LRCs In Theorems <ref> and <ref>, we utilize the inherent structure of defining sets to establish the (2,p-2) and (2,p-1)-localities of p-ary linear code 𝒞_𝒟, respectively. Now, we proceed to present a theorem that allows us to determine the (2,p)-locality of a p-ary linear code, where the defining set of this code is a subset of P_[m], solely based on the cardinality of its defining set. Let p≥2 be a prime, m > 2 an integer. Suppose that 𝒟 is a subset of P_[m], 𝒟^c= P_[m]∖𝒟. If |𝒟^c|< p^m-1-1/p-1, then 𝒞_𝒟 defined as in (<ref>) is a p-ary (2,p)-LRC . Let _p={0,-1,α_1,α_2,…, α_p-2}. We will show that for any nonzero 𝐠∈𝒟, there always exists a (p+1)-size set [𝐠,𝐡]:={𝐠,𝐡,[𝐡-𝐠],[𝐡+α_1𝐠],…, [𝐡+α_p-2𝐠]}⊂𝒟 for some 𝐡∈𝒟∖{𝐠}. As any 2 elements of the set [𝐠,𝐡]∖{𝐠} could be linearly combined to get 𝐠, we call [𝐠,𝐡]∖{𝐠} a repair set of 𝐠. For different 𝐡_i, 𝐡_j∈ P_[m]∖{𝐠}, it is easy to examine that [𝐠,𝐡_i]∩ [𝐠,𝐡_j]={𝐠}. So, there are p^m-1/p-1-1/p=p^m-1-1/p-1 disjoint repair sets of 𝐠 in P_[m]. Since |𝒟^c| < p^m-1-1/p-1, we have |𝒟|=|P_[m]|-|𝒟^c|> p^m-1> (p-1)p^m-1-1/p-1. According to the Pigeonhole principle, for any vector 𝐠∈𝒟, there always exists a repair set [𝐠,𝐡_0]∖{𝐠}⊂𝒟, where 𝐡_0∈𝒟 and 𝐡_0≠𝐠, which is equivalent to say that the coordinate occupied by 𝐠 has (2,p)-locality. Since the chosen of 𝐠 is arbitrary, all the coordinates of _𝒟 has (2,p)-locality. By combining Theorems <ref> and <ref>, we can provide a construction of p-ary alphabet-optimal (2,p)-LRCs. Let p≥ 3 be an odd prime, ℓ≥ 1 and m≥ 2 be integers. If 𝒜_1,𝒜_2, … ,𝒜_ℓ are nonempty subsets of [m] satisfying (i) 𝒜_1,𝒜_2, … ,𝒜_ℓ are mutually disjoint, (ii) M(𝒜_1,𝒜_2,…,𝒜_ℓ) ≤ p-1, and (iii) p^m-1>∑_i=1^ℓp^|𝒜_i|. 
If 𝒟= P_[m]∖ (⋃_i=1^ℓP_𝒜_i), 𝒟^c= P_[m]∖𝒟, then 𝒞_𝒟 constructed by (<ref>) is an alphabet-optimal (2,p)-LRC with parameters [p^m-∑_i=1^ℓp^|𝒜_i|+ℓ-1/p-1 ,m, p^m-1 -∑_i=1^ℓp^|𝒜_i|-1]. From (i) and (iii), |𝒟^c|=∑_i=1^ℓ(p^|𝒜_i|-1)/p-1< p^m-1-1/p-1. By Theorem <ref>, 𝒞_𝒟 has (2,p)-locality. From (i)-(ii) and Theorem <ref>, 𝒞_𝒟 is a p-ary [n,k,d] Griesmer code, where n=p^m-∑_i=1^ℓp^|𝒜_i|+ℓ-1/p-1, k=m, d=p^m-1 -∑_i=1^ℓp^|𝒜_i|-1. From (iii), we have ⌈p^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^m-2⌉=p. Note that n=∑_j=0^m-1⌈p^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^j⌉. Thus ∑_j=0^m-2⌈p^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^j⌉=n-1, ∑_j=0^m-3⌈p^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^j⌉= n-(p+1). Thanks to the Griesmer bound, k_ opt^(p)(n -(p+1), d) = m -2. Utilizing the bound of (<ref>) with t = 1, we get that k ≤ 2 + k_ opt^(p)(n-(p+1), d) = m. Therefore, the linear code 𝒞_𝒟 achieves the bound of (<ref>). From the proofs of Theorems <ref>, <ref> and <ref>, we can see that if we replace prime p with any prime power q, the statements of localities for p-ary codes can be generalized to q-ary codes without any difficulties. Simplex codes over any finite field 𝔽_q are alphabet-optimal (2,q)-LRCs with respect to the bound (<ref>). From Remark <ref>, we know that any q-ary Simplex code 𝒮_m has (2,q)-locality. The parameters of 𝒮_m are [n=q^m-1/q-1,k=m,d=q^m-1]. Utilizing the bound (<ref>) with t = 1, we get that k ≤ 2 + k_ opt^(q)(q^m-1/q-1-(q+1), q^m-1) (b)≤ m, where (b) is from the Plotkin bound. The proof is done. § ALPHABET-OPTIMAL (R,Δ)-LRCS WITH AVAILABILITY In this section, we will investigate the locality of the codes 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) constructed in Theorem <ref> with s≥ 1. In the absence of ambiguity, we call an [n,k,d]_q code alphabet-optimal if it achieves some upper bound for k which takes the alphabet size q into consideration. As we can see in Section <ref>, the set P_[m] is the beacon of proofs of (r,δ)-localities. When s> 1, there are s copies of P_[m] in the generator matrix of code 𝒞_(𝒟_1,𝒟_2,…,𝒟_s). Consequently, each coordinate of 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) may possess s disjoint repair sets. Next, we will give a formal definition of (r,δ)-locality with availability from the aspect of generator matrix. Let 𝒞 be a p-ary linear code with generator matrix G = [𝐠_1, …, 𝐠_n]. The i-th coordinate, 1 ≤ i ≤ n, of 𝒞 is said to have (r, δ)_t-locality if there exist t pairwise disjoint sets ℐ^(i)_1 ,…, ℐ^(i)_t, which are subsets of {𝐠_1, …, 𝐠_n}∖{𝐠_i}, satisfying that for each j ∈ [t], (1) |ℐ^(i)_j∪{𝐠_i}| ≤ r+δ-1, (2) any δ-1 vectors in ℐ^(i)_j∪{𝐠_i} are linear combinations of the remaining vectors in ℐ_j^(i)∪{𝐠_i}. If all the coordinates of 𝒞 have (r,δ)_t-locality, then 𝒞 is called an (r,δ)_t-locally repairable code, or in short, (r,δ)_t-LRC. From the above definition, a p-ary [n,k,d] code with (r, δ)_t-locality is also a code with (r, δ)_i-locality, 1≤ i ≤ t-1. So, for a linear code with (r, δ)_t-locality, if it is (r, δ)-alphabet-optimal, then it is also (r, δ)_t-alphabet-optimal. From Remark <ref>, for an (r, δ)_t-LRC, where t≥ 1, we can prove that it is (r, δ)_t-alphabet-optimal by proving that it is (r, δ)-alphabet-optimal. Let the notation be the same as in Theorem <ref>. If ℬ_1^(j),ℬ_2^(j),…,ℬ_ℓ_j^(j) are mutually disjoint for every j∈ [s], M(𝒜_1,𝒜_2,…,𝒜_ℓ) ≤ p-1, and ⌈sp^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^m-1⌉< p, then 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) defined by (<ref>) is an alphabet-optimal (2,p-1)_s-LRC with parameters [sp^m-∑_i=1^ℓp^|𝒜_i|+ℓ-s/p-1 ,m, sp^m-1 -∑_i=1^ℓp^|𝒜_i|-1]. 
From Theorem <ref>, 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) defined by (<ref>) is a p-ary [n,k,d] Griesmer code, where n=sp^m-∑_i=1^ℓp^|𝒜_i|+ℓ-s/p-1, k=m, d=sp^m-1 -∑_i=1^ℓp^|𝒜_i|-1. It is evident that all the ℬ_j^(i), where 1≤ j≤ℓ_i and 1≤ i≤ s, are proper subsets of [m], so P_[m]⊂𝒟_r for all 1≤ r≤ s. Then from Lemma <ref> and Definition <ref>, coordinates occupied by 𝒟_i∖P_[m] have (2,p-1)_s-locality, where 1≤ i≤ s. From the proof of Theorem <ref>, for each i∈ [s], there exists an 𝒜^(i)⊂ [m] with size m-1 such that 𝒜^(i)≠ℬ^(i)_j for all j∈ [ℓ_i]. Then from Theorem <ref>, for any 𝐠 in P_[m], we can find a 𝐡_i∈ P_𝒜^(i)⊂𝒟_i∖P_[m] such that there is a p-size set {𝐠,𝐡_i,𝐡_i+α_1𝐠,…,𝐡_i+α_p-2𝐠}⊂𝒟_i for all 1≤ i≤ s, where 𝔽_p={0,-1,α_1,α_2,…,α_p-2}. From Theorem <ref> and Definition <ref>, coordinates occupied by P_[m] have (2,p-1)_s-locality. In summary, 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) is a (2,p-1)_s-LRC, and of course a (2,p-1)-LRC. Note that n=∑_j=0^m-1⌈sp^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^j⌉, from (<ref>), we have ∑_j=0^m-2⌈sp^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^j⌉>n-p. Since ℬ_1^(j),ℬ_2^(j),…,ℬ_ℓ_j^(j) are mutually disjoint for every j∈ [s], we can deduce that ⌈sp^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^m-2⌉≥ s(p-1), so ∑_j=0^m-3⌈p^m-1-∑_i=1^ℓp^|𝒜_i|-1/p^j⌉≤ n-1-s(p-1)≤ n-p. Thanks to the Griesmer bound, k_ opt^(p)(n -p, d) = m -2. Utilizing the bound (<ref>) with t = 1, we can derive that k ≤ 2 + k_ opt^(p)(n-p, d) = m, which means that 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) is alphabet-optimal with respect to (2,p-1)-locality. From Remark <ref>, 𝒞_(𝒟_1,𝒟_2,…,𝒟_s) is also alphabet-optimal with respect to (2,p-1)_s-locality. From the above theorem, the key of constructing alphabet-optimal (r, δ)_s-LRCs is to make (<ref>) hold, which can be fulfilled when s is small, for example, let s< p. § CONCLUSION In this paper, we first proposed a construction of linear codes 𝒞_(𝒟_1,…𝒟_s) over 𝔽_p by generalizing the constructions in <cit.>. Similarly with <cit.>, a necessary and sufficient condition for the linear codes 𝒞_(𝒟_1,…𝒟_s) to be Griesmer codes, a sufficient condition for 𝒞_(𝒟_1,…𝒟_s) to be distance-optimal, were presented. From which, some new constructions of Griesmer codes and distance-optimal codes can be derived. Secondly, we proposed criteria for determining the (2,p-2), (2,p-1), and (2,p)-localities of p-ary linear codes constructed by eliminating elements from a complete projective space, and some alphabet-optimal (2,p-1)-LRCs and (2,p)-LRCs were provided. Specially, by showing that the methods of determining (r,δ)-localities of p-ary code can be generalized to q-ary codes for any prime power p, we proved that the q-ary Simplex codes are alphabet-optimal (2,q)-LRCs. Finally, we explored the availability of (r,δ)-LRCs constructed from the generalized framework (<ref>) with an alphabet-optimal construction. In the following research work, we plan to explore more (r,δ)-localities of linear codes constructed from (<ref>) and (<ref>) and to propose more alphabet-optimal (r,δ)-LRCs. 99 Cadambe2015 Cadambe V. R., Mazumdar A.: Bounds on the size of locally recoverable codes. IEEE Trans. Inf. Theory 61(11), 5787–5794 (2015). Cai2020-ava Cai H., Miao Y., Schwartz M., Tang X.: On optimal locally repairable codes with multiple disjoint repair sets. IEEE Trans. Inf. Theory 66(4), 2402–2416 (2020). Cai2020 Cai H., Miao Y., Schwartz M., Tang X.: On optimal locally repairable codes with super-linear length. IEEE Trans. Inf. Theory 66(8), 4853–4868 (2020). Chen2018 Chen B., Xia S.-T., Hao J., Fu F.-W.: Constructions of optimal cyclic (r,δ) locally repairable codes. IEEE Trans. Inf. 
Theory 64(4), 2499–2511 (2018). Chen2019 Chen B., Fang W., Xia S.-T., Fu F.-W.: Constructions of optimal (r,δ) locally repairable codes via constacyclic codes. IEEE Trans. Commun. 67(8), 5253–5263 (2019). Chen2021it Chen B., Fang W., Xia S.-T., Hao J. and Fu F.-W.: Improved bounds and Singleton-optimal constructions of locally repairable codes with minimum distance 5 and 6. IEEE Trans. Inf. Theory 67(1), 217–231 (2021). Ding2007 Ding C., Niederreiter H.: Cyclotomic linear codes of order 3. IEEE Trans. Inf. Theory 53(6), 2274–2277 (2007). Ding2008 Ding C., Luo J., Niederreiter H.: Two-weight codes punctured from irreducible cyclic codes. Ser. Coding Theory Cryptol. 4, 119–124 (2008). Fang2018 Fang W., Fu F.-W.: Optimal cyclic (r, δ) locally repairable codes with unbounded length. Finite Fields Appl. 63, 101650 (2020). Fang2021 Fang W., Chen B., Xia S.-T., Fu F.-W.: Singleton-optimal LRCs and perfect LRCs via cyclic codes. In: Proc. IEEE Int. Symp. Inf. Theory, pp. 3261–3266 (2021). Fu2020 Fu Q., Li R., Yang S.: Optimal (r,δ)-locally repairable codes from Simplex Code and Cap code. IEEE Access 8, 215414–215418 (2020). Gopalan2012 Gopalan P., Huang C., Simitci H., Yekhanin S.: On the locality of codeword symbols. IEEE Trans. Inf. Theory 58(11), 6925–6934 (2012). Goparaju2014 Goparaju S., Calderbank R.: Binary cyclic codes that are locally repairable. In: Proc. IEEE Int. Symp. Inf. Theory, pp. 676–680 (2014). Griesmer1960 Griesmer J.H.: A bound for error-correcting codes. IBM J. Res. Dev. 4(5), 532–542 (1960). Guruswami2019 Guruswami V., Xing C., Yuan C.: How long can optimal locally repairable codes be? IEEE Trans. Inf. Theory 65(6), 3662–3670 (2019). Huang2016 Huang P., Yaakobi E., Uchikawa H., Siegel P.: Binary linear locally repairable codes. IEEE Trans. Inf. Theory 62(11), 6268–6283 (2016). Hyun2020 Hyun J.Y., Lee J., Lee Y.: Infinite families of optimal linear codes constructed from simplicial complexes. IEEE Trans. Inf. Theory 66(11), 6762–6773 (2020). Jin2019 Jin L.: Explicit construction of optimal locally recoverable codes of distance 5 and 6 via binary constant weight codes. IEEE Trans. Inf. Theory 65(8), 4658–4663 (2019). Jin2020 Jin L., Ma L., Xing C.: Construction of optimal locally repairable codes via automorphism groups of rational function fields. IEEE Trans. Inf. Theory 66(1), 210–221 (2020). Kong2021 Kong X., Wang X., Ge G.: New constructions of optimal locally repairable codes with super-linear length. IEEE Trans. Inf. Theory 67(10), 6491–6506 (2021). Li2019 Li X., Ma L., Xing C.: Optimal locally repairable codes via elliptic curves. IEEE Trans. Inf. Theory 65(1), 108–117 (2019). Luo2019 Luo Y., Xing C., Yuan C.: Optimal locally repairable codes of distance 3 and 4 via cyclic codes. IEEE Trans. Inf. Theory 65(2), 1048–1053 (2019). Luo2021 Luo G., Cao X.: Constructions of optimal binary locally recoverable codes via a general construction of linear codes. IEEE Trans. Commun. 69(8), 4987–4997 (2021). Luo2022 Luo G., Ling S.: Application of optimal p-ary linear codes to alphabet-optimal locally repairable codes. Des. Codes Cryptogr. 90, 1271–1287 (2022). Ma2019 Ma J., Ge G.: Optimal binary linear locally repairable codes with disjoint repair groups. SIAM J. Discret. Math. 33(4), 2509–2529 (2019). Pamies2013 Pamies-Juarez L., Hollmann H.D., Oggier F.: Locally repairable codes with multiple repair alternatives. In: Proc. IEEE Int. Symp. Inf. Theory, pp. 892–896 (2013). Prakash2012 Prakash N., Kamath G. M., Lalitha V., Kumar P. 
V.: Optimal linear codes with a local-error-correction property. In: Proc. IEEE Int. Symp. Inf. Theory, pp. 2776–2780 (2012). Qiu2021 Qiu J., Zheng D., Fu F.-W.: New constructions of optimal cyclic (r, δ) locally repairable codes from their zeros. IEEE Trans. Inf. Theory 67(3), 1596–1608 (2021). Rawat2015 Rawat A.S., Mazumdar A., Vishwanath, S.: Cooperative local repair in distributed storage. EURASIP J. Adv. Signal Process. 2015, 107 (2015). Rawat2016 Rawat A. S., Papailopoulos D. S., Dimakis A. G., Vishwanath S.: Locality and availability in distributed storage. IEEE Trans. Inf. Theory 62(8), 4481–4493 (2016). Silberstein2015 Silberstein N., Zeh A.: Optimal binary locally repairable codes via anticodes. In: Proc. IEEE Int. Symp. Inf. Theory, pp. 1247–1251 (2015). Silberstein2018 Silberstein N., Zeh A.: Anticode-based locally repairable codes with high availability. Des. Codes Crypt. 86, 419–445 (2018). Silberstein2019 Silberstein N., Etzion T., Schwartz M.: Locality and availability of array codes constructed from subspaces. IEEE Trans. Inf. Theory 65(5), 2648–2660 (2019). Solomon1965 Solomon G., Stiffler J.J.: Algebraically punctured cyclic codes. Inf. Control 8(2), 170–179 (1965). Sun2019 Sun Z., Zhu S., Wang L.: Optimal constacyclic locally repairable codes, IEEE Commun. Lett. 23(2), 206–209 (2019). Tamo2014 Tamo I., Barg A.: A family of optimal locally recoverable codes. IEEE Trans. Inf. Theory, 60(8), 4661–4676 (2014). Tamo2015 Tamo I., Barg A., Goparaju S., Calderbank R.: Cyclic LRC codes and their subfield subcodes. In: Proc. IEEE Int. Symp. Inf. Theory, pp. 1262–1266 (2015). Tamo2016 Tamo I., Barg A., Goparaju S., Calderbank R.: Cyclic LRC codes, binary LRC codes, and upper bounds on the distance of cyclic codes. Int. J. Inf. Coding Theory 3(4), 345–364 (2016). Tamo2016-ava Tamo I., Barg A., Frolov A.: Bounds on the parameters of locally recoverable codes. IEEE Trans. Inf. Theory 62(6), 3070–3083 (2016). Tan2021 Tan P., Fan C., Ding C., Tang C., Zhou Z.: The minimum locality of linear codes. Des. Codes Cryptogr. 91, 83–114 (2023). Wang2014 Wang A., Zhang Z.: Repair locality with multiple erasure tolerance. IEEE Trans. Inf. Theory 60(11), 6979–6987 (2014). Xing2019 Xing C., Yuan C.: Construction of optimal (r, δ)-locally recoverable codes and connection with graph theory. IEEE Trans. Inf. Theory 68(7), 4320–4328 (2022).
http://arxiv.org/abs/2307.06149v1
20230712131147
Emerging Physics Education Researchers' Growth in Professional Agency: Case Study
[ "Shams El-Adawy", "Scott V. Franklin", "Eleanor C. Sayre" ]
physics.ed-ph
[ "physics.ed-ph" ]
APS/123-QED Department of Physics, Kansas State University, Manhattan, Kansas, USA School of Physics and Astronomy, Rochester Institute of Technology, Rochester, New York, USA Department of Physics, Kansas State University, Manhattan, Kansas, USA Center for Advancing Scholarship to Transform Learning, Rochester Institute of Technology, Rochester, New York, USA Improving the physics enterprise to broaden participation in physics is one of the main goals of the physics education research community. Many classically trained physics faculty transition during their faculty career into engaging in research investigating the teaching and learning of their discipline. There is scarce research on the support and needs of these faculty as they engage in their first projects in this new research field for them. We investigate agency growth of two emerging physics education researchers and one emerging mathematics education researcher as they participate in a professional development program. We ground our case study analysis of interview data in a theoretical framework on agency. We identify the elements of the professional development program that were transformative in our case study participants ’ trajectory in education research. Receiving get-started information, building mechanisms to sustain research projects and engaging with a supportive community help participants transform general interests to specific questions, articulate concrete next-steps and increase their sense of self-efficacy. During this professional development program all three case study participants gain agency in this new area of research for them. These identified program elements that affect agency growth can inform professional development opportunities for faculty transitioning into discipline-based education research, which expands our understanding of how to build capacity in the field. Emerging Physics Education Researchers' Growth in Professional Agency: Case Study Eleanor C. Sayre August 12, 2023 ================================================================================= § INTRODUCTION Physics education research (PER), a discipline-specific case of discipline-based education research (DBER), studies the human aspects of doing physics and aims to improve the teaching, learning, and inclusion of physics as a scientific field and human endeavor. More generally, DBER “investigates learning and teaching in a discipline using a range of methods with deep grounding in the disciplines' priorities, worldview, knowledge and practices"<cit.>. Because PER is physics, people who do PER are physicists <cit.>, and thus studies of their learning, inclusion, and professional growth fall under the purview of PER. How do people learn to do PER? We are particularly interested in how classically-trained physicists in university faculty positions learn to engage in PER and develop their first research projects in this new-to-them subfield. This is an important topic for understanding how our field grows and develops, alongside complementary studies of how graduate students become physics education researchers<cit.> and how our field has grown historically<cit.>. Studies of STEM faculty who engage in DBER have generally focused on Science Faculty with Education Specialities (SFES) <cit.>. SFES refer to university science faculty who not only engage in research in science education, but also in other science education initiatives such as preparing future science teachers and course or curriculum development <cit.>. 
Most studies on SFES have focused on individual disciplines efforts in biology, chemistry, geoscience, mathematics, engineering and physics departments, rather than across STEM disciplines <cit.>. The literature on SFES faculty has reported on differences in the origin of SFES positions based on the type of institution the faculty are: at PhD-granting institutions, SFES are often hired to relieve other faculty from their teaching load; at MS-granting institutions, SFES are often hired to train future K-12 science teachers; and at primary undergraduate institutions (PUI), SFES often transition to their role after their hire to fulfill a need in their department <cit.>. The background and training of SFES faculty are varied and have been changing over the last decade, generally mimicking growth trends in PER over the last four decades <cit.>. The professional backgrounds of SFES can be divided into two rough groups. One group, which is becoming more prevalent as graduate schools specifically prepare PhDs in DBER, have formal training in DBER via their PhD or postdoctoral training <cit.>. A second group were formally trained outside of DBER and transitioned to research on teaching and learning as part of their faculty appointment, either upon hire or years afterwards <cit.>. This second group of SFES are sometimes referred to as “boundary crossers” since they want to become more scholarly about teaching and learning, crossing the boundary between another subfield and education research. Some maintain active research presence in both areas simultaneously, while others devote their entire scholarly effort to education research <cit.>. In either case, these SFES who begin to engage in education research as part of their faculty appointment are a group of emerging education researchers with limited formal opportunities to train in their new scholarly field. There is recent research studying challenges STEM education researchers face in finding community in DBER when those STEM faculty are trained in their specific discipline but have not been formally trained in DBER <cit.>. There is also research that identifies the barriers these STEM faculty may face with other interdisciplinary education research such as the learning sciences community <cit.>. More research is needed on the type of support they need when entering the field and their growth as emerging STEM education researchers. In this work, it is important to center the connections among each emerging researcher's motivations in engaging in DBER, the constructs that are most relevant to each of them, and the experiences that sparked their interest in STEM education research. In this paper, we investigate emerging physics and math education faculty’s transition to DBER to examine the mechanisms by which we can best support them. We address the following research question: How do STEM faculty gain agency during the process of engaging in education research? Exploring this question will allow us to better understand the ways in which DBER can be conducted in even more diverse instructional settings and institutions to improve STEM education. We explore this question through a multiple case study analysis of three participants in a professional development program, tracking how program activities affect their agency as researchers. 
§ FACULTY AGENCY IN STEM We see faculty as agentic individuals who make changes to their teaching and adopt new teaching principles, building on their deep expertise in their institutional context as well as their pedagogical and physics knowledge. This agentic lens has become a more common lens by which we investigate faculty professional development. This agentic perspective has been proven to provide valuable insight around their teaching because it highlights the strengths they bring to their teaching as both content and context experts <cit.>. The terms “agency” and “professional agency” have often been used interchangeably in DBER when investigating STEM faculty’s agency in instructional change. In particular, agency has often been examined from a particular lens: how individuals have agency within the power and constraints of systems they engage with and how they exert their agency <cit.>. Less research has focused on faculty's perception of their agency. In this section, we look at how agency has been studied in DBER, particularly in different STEM education fields and how we focus on perception of agency development in this study. In engineering education, professional agency growth and identity negotiation have been conceptualized in the context of instructional change to examine the enactment of professional agency of instructors given their individual resources and social conditions <cit.>. Du et al. showed how instructors leveraged resources and social conditions to develop their agency and strategies to overcome pedagogical challenges they faced. In mathematics education, faculty’s professional agency has been studied in community colleges classrooms, where it was shown that part-time faculty are less agentive than full-time faculty in taking instructional decisions <cit.>. Other studies focused on science and math instructors' professional development showed that teacher agency is a key feature in the successful implementation of professional learning communities <cit.>. In physics education, most research on agency and faculty has focused on different facets of professional agency enactment or growth within the context of teaching. Strubbe et al. use key parts of Bandura’s work to develop an analytical framework of faculty agency. They characterize key features of physics faculty agency around their teaching to demonstrate the value of agentic-based perspectives in highlighting faculty's productive ideas they have about student learning and teaching <cit.>. This research argues that researchers and educators should support faculty agency in their teaching, however, it does not provide the mechanisms by which we can facilitate that support. Other research in physics education around agency in teaching include work on the Departmental Action Teams (DAT), the Faculty Online Learning Communities (FOLC), and the Physics and Astronomy New Faculty workshops (NFW), recently renamed the Faculty Teaching Institute (FTI) <cit.>. DAT are externally facilitated groups consisting of faculty, students, and staff focusing on creating sustainable departmental change <cit.>. The research team behind the DAT initiative highlights the value of having agency when they select an educational issue they will address <cit.>. Nevertheless, DAT explicitly encourages hiring external facilitators for discussions on educational change, which can diminish faculty agency in leading change initiatives in their departments. 
FOLC are professional development communities built to support instructors in using research based teaching practices <cit.>. Researchers engaging in studying FOLC have identified possible mechanisms to support physics faculty’s agency in teaching. This included focusing on their productive ideas to allow for deep reflection among faculty who participate in these conversations about teaching and educational change <cit.>. While FOLC are often lead by facilitators, those facilitators are often drawn from participant near-peers, such as other faculty at similar institutions or former participants. NFW (renamed recently as Physics and Astronomy Faculty Teaching Institute (FTI) <cit.>) are workshops that aim to improve physics teaching by introducing new physics and astronomy faculty to research-based instructional strategies <cit.>. In the research around the impact of NFW, researchers underlined the value of creating professional development experiences for faculty by helping develop their agency <cit.>. However, NFW has historically encouraged faculty to use the results and findings of physics education researchers in their classroom instead of engaging in doing research or curriculum development themselves, which directly depresses physics faculty’s agency in instructional change. Each of these programs and initiatives address particular physics faculty needs when it comes to instructional and departmental changes: multi-layer facilitation of department instructional change (DAT), community building and peer support in implementing research-based practices (FOLC) and advocacy in using research-based practices (NFW). The research on these program highlights the value of agentic perspectives when examining STEM faculty instructional change. These programs address the faculty as instructors or as department members who perform service, but they do not address – and in some cases, specifically exclude – faculty acting as researchers. Less research has focused on faculty interested in doing scholarly work on learning and teaching rather than solely implementation of research results. Hence, we fill a gap in the literature by investigating the perception of agency growth of physics and math researchers transitioning into education research. For STEM researchers who have not received extensive training in discipline-based education research, we have minimal evidence of their experiences as they transition into DBER. In particular, our context draws participants from multiple institutions (similarly to NFW) and promotes community development among participants with shared goals (similar to FOLCs). Our case study participants engage in program activities with the intent to bring their growing skills and research projects back to their home institutions. § THEORETICAL FRAMEWORK: AGENCY The literature on student and faculty agency is extensive. One of the most commonly used frameworks for studying agency of both student and faculty comes from social cognitive theory. From this lens, human agency theory of development, adaptation and change adopts the view that individuals are products of the interplay among interactions of environment, behavior and self <cit.>. Bandura’s agency lens informs part of the subject-oriented socio-cultural approach to agency growth which posits that the subject of focus is an agentic actor in relation to the social world and agency is temporally constructed within engagements with different tasks <cit.>. 
Bandura’s work lays some of the foundation for how the definition of professional agency came to be. According to Etelapelto, “professional agency is practiced when professional subjects and/or communities exert influence, make choices and take stances in ways that affect their work and/or their professional identities” <cit.>. The concept of professional agency focuses on agency in one’s career and has been studied in the context of instructional change and teacher education. The four main areas are professional development, education policies, teacher identity development and social justice <cit.>. These topics overlap and interact with each other as they constitute the major factors that provide affordances or constraints in university faculty’s professional agency growth. In this paper, we use Bandura's agency framework with a focus on professional agency examining how participants perceive their agency development during their engagement in a professional development program as they transition into DBER. In other words, within the context of professional development, we use Bandura's definition of agency as an individual’s ability to make choices and take action based on intentionality, forethought, self-reactiveness and self-reflectiveness <cit.> to ground our analysis of STEM faculty who transition to DBER. Table <ref> summarizes the definition of the components from Bandura’s framework and how they are operationalized within the context of this study. Intentionality refers to planning by setting measurable steps for the short or long term to achieve specific goals. When engaging in any research project, deliberate thinking and mapping of the research directions is a common practice. Research has shown that intentionality plays a critical role in mentorship, especially in experiential learning experiences <cit.>. So when engaging in a first DBER research project, articulating intent and the scale of engagement can help solidify research directions. In our context, we are using it to get a deeper understanding of what emerging STEM education researchers are aiming for by engaging in DBER and how their aims and hopes evolve as they engage in an experiential learning experience. Forethought, the process of defining specific tasks and their potential impact on target goals, is an integral part of the research process. Forethought allows us to articulate deeper details that provide insight into intentionality. Researchers show how critical the task analysis/strategic planning phase of forethought plays in self-regulating behaviors and motivation in the short and long term <cit.>. In our context, DBER tasks planning allows us to get a deeper understanding of how emerging STEM education researchers plan to engage in DBER. It provides more details than intentionality and connects with the other components (self-reactiveness and self-reflectiveness). By exploring emerging STEM education researchers' forethought, we can identify what tasks and behaviors they envision needing most to support them in their DBER research project. Self-reactiveness is the motivation and self-regulation needed to execute a planned action. In our context, we are particularly interested in emerging STEM education reserachers’ intrinsic motivation to DBER. We are drawing on Self-Determination Theory (SDT), a theory about motivation that centers around a learner’s agency when making choices to reach desired goals <cit.> to examine, justify and interpret their development. 
SDT suggests that three psychological needs: competence, relatedness and autonomy have to be satisfied to have the most self-determined form of motivation <cit.>. In our context competence refers to the need to feel proficient in engaging in DBER. Relatedness refers to the need to feel connected to the DBER community, the people and the research products value. Autonomy refers to the need to have a sense of choice in behavior and tasks that drive their engagement in DBER. Although we are most interested in understanding intrinsic motivation, SDT draws us to also examine the role of extrinsic motivation, especially how external factors can regulate behavior. These emerging STEM education researchers are engaging in DBER while holding other responsibilities within their respective institutions, which inevitably plays a role in how they view their role in DBER. As such, understanding DBER engagement through the interplay of these regulatory factors can provide insight into their motivation to do DBER and how it evolves as they engage with a DBER professional development program. Self-reflectiveness refers to a “self reflective belief in one's ability to succeed”. In our context, DBER self-reflectiveness allows us to gain insight into emerging STEM education perceived abilities to do research, which can help us understand what support is needed. When looking at self-reflectiveness, Bandura draws our attention to self-efficacy, which is defined as one’s perceived skill and competence in their ability to undertake a behavior <cit.>. Understanding emerging DBER self-efficacy can inform support of what professional development needs would be most beneficial. Bandura’s work investigating the relation between agency and self-efficacy underlines how changes in self-efficacy have a direct and critical impact on agency, where increasing self-efficacy is a necessary condition for increasing agency, whereas increasing other aspects of agency does not necessarily entail an increase in self-efficacy <cit.>. Bandura’s theory on self-efficacy also suggests four different sources that contribute to a person’s perceived self-efficacy: mastery experiences, vicarious learning, verbal persuasion, and physiological states <cit.>. In our context, mastery experiences refer to experiences that provide information about personal successes or failures in task similar to the new DBER experience they are engaging in and influence emerging STEM education researchers’ confidence in their ability to perform a DBER related task. Vicarious learning refers to learning that occurs by observing others performing DBER either by observing how they are engaging in DBER or how they compare their DBER work with others. Verbal persuasion is related to messages received about their ability to do DBER conveyed through interactions with the DBER community. Physiological state refers to emotional indicators that an emerging STEM education researcher may rely on when evaluating their ability to do DBER. These four elements of agency: intentionality, forethought, self-reactiveness and self-reflectiveness, are interrelated and combined they provide us with a valuable and holistic lens to study the factors that impact STEM faculty as they transition into DBER. 
§ DATA The backdrop for this research study is a professional development program, Professional development for Emerging Education Researchers (PEER), designed to help faculty, postdocs and graduate students jumpstart their transition into the world of discipline-based education research <cit.>. The central activity of PEER is a series of workshops to help participants design and conduct research projects; engage in targeted experiential work to develop their projects and skills; and collaborate and form a support community of peers, mentors and collaborators. Conducting this study within the context of PEER was advantageous for two main reasons: firstly, our data collection was grounded in participants' experiences with the program; secondly, most participants were leading their first research projects in DBER so it was an opportune time to examine the mechanisms that best support emerging STEM education researchers. For this particular study, the data stems from participants in one of the virtual editions of the PEER program, primarily drawing from a national audience of emergent mathematics and physics education researchers. This field school occurred over Zoom through spring 2021 and was attended by 45 emerging mathematics and physics education researchers from a variety of research and teaching institutions across the US. The workshop consisted of a kickoff session, three two-hour sessions spread over 6 weeks, and then a three-day intensive at the end of June. Table <ref> lists goals and activities of each set of workshop sessions during PEER. As can be seen in the table, PEER is professional development program expecting active engagement from participants where collaboration, responsivity and group work is embedded in all aspects of the program. At the end of each session, participants were asked to list their questions they had and topics they wanted to learn more about. The following sessions incorporated these questions and interest participants had. Participants were solicited for semi-structured interviews before and after participation. Semi-structured interviews are a common tool for data collection in qualitative research, which uses a series of open-ended questions allowing for emerging themes in the discussion to be explored <cit.>. Thirteen participants were interviewed pre-participation and eleven participants were interviewed in the post-interviews. Only five participants took part in both pre and post interviews (for a total of nineteen participants). § METHODS A case study as a methodological approach focuses on developing an in-depth analysis of a case or cases to capture the complexity of the unit of analysis <cit.>. When doing a multiple case study, the researcher selects a few case study participants to illustrate the unit of analysis, which is agency. In practice, the case study analysis was conducted as follows. The first author conducted a preliminary analysis of the interview data, which highlighted participants who addressed the unit of analysis. Three of the five participants who took part in both pre and post interviews were chosen using a purposeful sampling process, a common way of selecting cases in qualitative research <cit.>. Our case study participants addressed components of agency and were at different stages of their faculty career as they started to engage in their first education research project in their discipline. 
Given that these faculty were at a similar stage of engagement with DBER but at different stages of their professional lives, they allow us to investigate variation and similarities in agency growth in education research for faculty with different teaching and research experiences. After selection, the first author provided a detailed description of themes for each case study participant grounded in Bandura’s agency framework. This thematic analysis was expanded to compare and contrast themes across cases to identify the key elements relating to agency growth. After the themes across cases were characterized in the theoretical framework, the first author wrote an initial analysis of the case studies. Then, members of the research team reviewed the analysis together. The instances where disagreement was identified, a discussion followed until agreement was met. This process often resulted in the first author reviewing the interview transcripts to provide more evidence for their interpretation with the relevant pieces of data. § CASE STUDY PARTICIPANTS Our three case study participants are given the pseudonyms Olivia, Madison and Akemi. During their participation in this study, Madison and Akemi are leading their first physics education research (PER) project, whereas Olivia is leading her first math education research (MER) project. As they are chairing the first project in discipline-based education research, all three case study participants identified a common need to gain practical skills and a better sense of what the field of DBER was. These beliefs and motivations were a strong reason for their participation in a professional development program such as PEER. Olivia is a Full Professor in a mathematics department at a public land grant university. She has been in her current mathematics departments for over twenty years teaching introductory and upper-level mathematics courses. Her graduate training is in mathematics and her current primary research area is in graph theory. During her participation in this study, she was exploring mathematics education research to help make evidence based instructional changes in her classroom and institution. In the course of her engagement with PEER, Olivia focused on developing her existing research project and developed an understanding of where she can situate herself in mathematics education research (MER). Madison is an Associate Professor in a physics department at a primarily undergraduate institution. She teaches many of the undergraduate physics courses, but she is especially passionate about instructional laboratory teaching in physics. Her graduate training is in physics and her current primary research area is in condensed matter physics. During her participation in this study, she started exploring physics education research to inform and assess her work redesigning instructional labs in her department, which included facilitating a departmental faculty learning community. In the course of her engagement with PEER, Madison focused on narrowing down her research questions and getting started on writing an NSF grant to fund her physics education research (PER) project. Akemi is a Visiting Faculty Member in a physics department at a private liberal arts college during the pre-interviews and a high school science teacher in the post-interview. She was teaching a few introductory undergraduate physics courses and was about to teach high school science. Her graduate training is in physics with a research focus in condensed matter physics. 
As an early-career scientist during her participation in this study, she was engaging in physics education research to make evidence-based instructional decisions in the classroom as well as build her research portfolio for her career advancement. In the course of her engagement with PEER, Akemi focused on refining her research project and getting started with data collection. § ANALYSIS In the following section, we discuss our data analysis within each component of Bandura's framework for each case study participant. We present each participant's pre-PEER status and post PEER status for each component of the framework. §.§ Intentionality Intentionality in the literature is defined as the planning for specific actions for the short or long terms to achieve goals. In our context, intentionality refers to what emerging STEM education researchers plan to do or accomplish with their first DBER research projects. Features of intentionality were brought up by participants pre and post-PEER, highlighting the alignment of short and long-term plans with motivation. Project mapping at PEER was the central common activity contributing to intentionality. Table <ref> summarizes the status of intentionality for each participant. §.§.§ Pre-PEER Intentionality Olivia’s short term plan is to complete her MER project about a new pedagogical strategy they are implementing in their calculus courses, which is specific to her institutional context and issues they are having about introductory math courses: We are basically open admissions, which means we do get a lot of students who are first generation, low income, and so have nonmathematical readiness issues, [...] so looking at whether or not taking those students with really low prerequisite skills and putting them in a class that’s going to provide them with this corequisite support over the course of calculus, that is, to not ask them to drop back to precalculus but keep them in calculus with a little extra support. And we’re going to measure whether or not these students with low scores look like they can be successful in calculus. By providing background information about the type of institution she is at, Olivia sets the stage for her intention she articulates, which is measuring the impact of providing additional support to students in calculus instead of dropping them to pre-calculus courses. Her long term plan is to keep finding ways to measure how to make things better for students in math courses at her institution by investigating different instructional change strategies. Her intentions to engage in MER is to improve passing and retention rates in math courses. She is supported by her institutional context where she hopes to implement effective instructional strategies. Comparable to Olivia, Madison’s short term plan is to go through and complete at least one iteration of the research design of a PER project in order to be able to write and submit a grant proposal: I would just really love to come out with some type of completed product of, even if it’s like a draft or a logic module or, you know, questionnaires that I can send out. Start getting some concrete documentation and things to prepare for grant submission for this. PER also fits into Madison’s long-term career trajectory, where as a tenured professor, she hopes to incorporate PER in her research portfolio: Basically, this is my first year as a tenured associate professor, so a question is always what do I do now? 
What is going to be my next big thing to get me from associate to full professor? And I have my materials research, and I’ll continue to do that, but I really like the idea of kind of adding on to what I do as a way to get myself to that next big step in my career. Likewise we have similar factors as part of Akemi’s intentions in doing DBER. On the short-term scale, Akemi hopes to complete her current PER project to have a DBER project to discuss when applying for more permanent jobs: I have a major goal is kind of find what improves or decreases my students’ self-efficacy [...]I’m a visiting professor, so I’m not required to do research, but I know I have to do research to get a better job. On the long term scale, PER fits into her career prospects as she is looking for jobs that will require her to do research. She is considering faculty positions and public engagement positions where doing research on instructional change and best practices would be a core component. Doing and completing a STEM education research project will provide evidence of her expertise when she applies to more permanent positions, which will aid her career advancement. Pre-PEER in intentionality, we identified that on the long term scale Madison and Akemi wanted PER to become a major component of their research portfolio. Whereas Olivia wanted to keep being involved in improving teaching at her institution and sees that engaging in MER will allow her to do so. To reach those long-term goals, all three case study participants wanted to complete at least an iteration of the research design process. §.§.§ Post-PEER Intentionality The process of project mapping helped Olivia identify specific goals and actions that needed to occur and set intentions for the short term, which include submitting a paper and attending an upcoming MER conference: It both helped me make specific plans to submit a paper, clarify a new research question that is both new to me but also a new kind of skill set I need to address it [...] it helped me clarify what it is, is sort of my more recent research project, and it’s also helped me produce a more specific set of future plans. So, you know, submitting a particular paper, attending a particular conference, that sort of thing. In the long term, she plans to continue asking similar questions about improving math education. She plans to remain intellectually engaged in MER by creating master’s students research projects with data analysis relevant to her research: I’m probably kind of committed to a series of short-term plans for now. The other thing that I’m also in a really advantageous position is that my department has a statistics master’s program [...] If I can produce data from our institutional database, I can implicitly produce a statistics project for a master’s student who needs a statistics project, and so that also helps me kind of keep thinking about some of my questions. We see in the long-term plans, there is no significant difference in her intentions, but we can notice that project mapping helped refine conceptualization of short term plans. For Madison, her short and long term plans have not changed. However, she articulates more concrete steps in achieving those goals, which came up when she discussed the planning for next steps that happened at PEER. She wants to apply for a PER grant: I’m hoping over the next six months to work on a grant and that’s going to be various steps. 
I want to start actually just making like visuals for it just to help me process like what is the flow of the project, like the logic models and stuff. And yeah, the big goal for me is one of the like NSF education grants. Moreover, she still wants to include PER as a main area of research portfolio and is considering dedicating her sabbatical to this endeavor: Oh, the other like longer, longer term planning thing as well is part of the grant is my kind of strategic career plan. I got a sabbatical I could take at some point, so I would really like to have the money to take like a full year sabbatical to really focus on the education research side. For Akemi, her short term plan is more specific compared to the pre-interviews. Taking the time to map out her research projects at PEER enabled her to realize that she is at a stage where she is trying to find a good journal home. She is considering submission for a short conference paper and a longer journal paper: I can try a two-page proposal to the International Conference of Learning Science, I believe. That’s-[...]So try that and then see how it goes, and then after that, I can try something like PRPER [Physical Review-Physics Education Research], writing a 15-to-20-page stuff. So that’s my goal, so I’m trying to put that two-page thing, and then see what I missed. Her long term plan is still related to her job prospects but she switched positions during PEER. She is now a high school science teacher and may consider keeping her current position, but it is unclear where PER will fit into that: So I think the thing I’m imagining is more like I collaborate with someone else, and they probably teach at university or college and then I might… Then, I don’t know, if I’m researching on my own students, I don’t think the IRB will review that. I’m interested in that, but I don’t know a way to research on high school students. Post-PEER intentions in the short and long terms to improve teaching practices by doing DBER remained the same for Olivia, Madison and Akemi. Nevertheless, they had a more defined trajectory on how they will keep engaged in DBER work post-PEER, especially in their short term planning. The most significant PEER activity in their refinement of short term plans was the project mapping that happened at PEER that allowed each participant to conceptualize the next steps of their projects. §.§ Forethought In the literature, forethought is defined as the process of setting goals, anticipating actions and consequences to reach desired outcomes. In our context, forethought refers to what research tasks emerging STEM education researchers are considering undertaking and what they anticipate they need to successfully complete their first DBER research project. Unlike intentionality where the participants brought up project mapping, several program activities were brought up by participants when it came to forethought: interactions with facilitators, topical discussions, DBER literature, procedural knowledge and project feedback. However, the common theme pre-PEER in forethought was the common need for research project design support. Given the different stages they were at pre-PEER, there were nuances specific to each case participant's DBER project in the actions they foresee and post-PEER these were refined with the nuances relevant to each. Table <ref> summarizes the status of forethought pre and post PEER for participants. 
§.§.§ Pre-PEER Forethought Olivia has had little interaction with the DBER community, even informally; however, she had submitted a proposal to the National Science Foundation (NSF) to examine the ways in which those with very limited prerequisite skills succeed in calculus class with additional support. She had identified many parts of her research process and identified the areas where she believed she needed the most help, which are the refinement of research questions and the writing of a science education grant. She is anticipating refining her research questions by engaging with researchers with various backgrounds in the field. She is also anticipating the need to get a broad view of MER and the different steps of the research design process to enhance her grant writing: So a lot of that sort of the nuts-and-bolts aspects of submitting a science education, math education or sort of community transformation sort of grant is clearly, I’m clueless and I could use help on that. In parallel, Madison has had several informal conversations with members of the DBER community. She collaborated with education researchers when she opened up her classroom for data collection for education projects. She has concrete ideas for her research project and knows she needs help refining her research question and project into a tangible and viable study. She anticipates the need for guidance with different steps of the research design process. In particular, she anticipates needing help articulating and refining her research interest into a viable PER project: I’m coming in with kind of a concrete idea, I would love it if it’s almost like stepping me through what the project should look like. Like, helping me take what I have and think of ways of okay, how do I go with this kind of nebulous idea of faculty learning communities and labs, and how do I do take all these steps we’re going to talk about, like how do you assess the grant, how do you come up with good research. Unlike Madison, Akemi has had few interactions with the DBER community, but she is collecting data in her classroom to pursue her research interest. She has identified the need to better understand the structure of how to conduct DBER research as someone unfamiliar with the research field. She put together a proposal to conduct a research study to promote equity in her physics classroom but wanted guidance on refining her research questions. She also highlights needing help with data analysis to move forward with her research: I do want to learn how to analyze my data, I believe there is something about like coding stuff like that, but I don’t really know how to code my data. So yes, that’s definitely something that I want to learn. In forethought pre-PEER, we examined the common need that all three participants identified: research design refinement, particularly the refinement of their research questions. However, there were nuances in their stages of the research process. Olivia had applied for a grant from the NSF, so she had made an attempt to identify all the different parts of the research design process. Akemi had put together a PER study proposal at her institution that was accepted and she was already in the data collection phase. Madison had interacted peripherally with PER projects by welcoming researchers into her classroom to collect data. She had identified a research interest, but she had not put together the pieces of her research design.
§.§.§ Post-PEER Forethought For Olivia, by interacting with other STEM researchers through the workshops and the MER literature, she describes having a better sense of the DBER community. Readings and interactions with other participants at PEER has broadened her understanding of MER as a mathematician. It has also broadened her perception of what is MER, who does MER and how she perceives the field. She foresees herself playing a useful role bridging the disconnect that can exist between mathematicians and mathematics education researchers. PEER has also helped her articulate some specific actions and consequences she is anticipating as she continues to move forward with her MER project. Although she anticipates finding time to do DBER in her schedule challenging, she views time constraints as keeping her accountable. She will be encouraged to continue her interactions with the DBER community in the near future to complete her current MER project: I’m going to be forced for the next three years to be reaching out to DBER people in some form. And sort of periodically reevaluating whether or not I’m reaching my goals, not just with the project but more broader, like the things I specifically talked about, keeping in contact with people I’ve met and continuing to broaden my reading. She also anticipates some criticism of her work from the broader DBER community because her quantitative analysis does not depict a complete picture of students’ progress in their math courses and she anticipates the need for qualitative lens. For Olivia, the PEER activities that played a role in her growth in forethought were interactions with facilitators with DBER expertise, topical small group discussions and guiding engagement with DBER literature. As for Madison, different elements contribute to her growth in forethought. She has a better sense now of what a viable education research process is and how to transform research interest into a research project in PER. She foresees seeking out similar interactive professional development programs that focus on participants’ specifics research projects: [PEER helped identify] how do you go about transforming something you might be curious about into something that’s a viable research project? And I really like that. [...]Even if it was just like webinars or something, I would love to continue to engage with this because I feel like the workshops were… I like that they were really interactive, I like that they gave us a lot of time to work on our projects ourselves and I would love to do that in like a more guided sense. Obtaining get-started information has provided her with a roadmap on how to move from research interest to a viable research question and project in DBER. She has learned how to articulate her research question and refine it through the many successive opportunities in the PEER workshops that broke down the tasks related to DBER into manageable pieces. In particular, the PEER activities that played a role in her growth in forethought were procedural knowledge workshops, topical small group discussions and individualized project feedback from facilitators. For Akemi, obtaining information on how to get started has provided her with a roadmap on how to move her research project forward. She has gained insight into the significance of the different parts of the research design process such as theory and limitations. 
She values how explicitly DBER people think about the limits of their understanding, which she did not see much of in condensed matter physics. She believes that she has a far better idea of what her project is and how to move forward with it, as compared to before doing PEER. To situate her work within the field, she anticipates framing her papers in a similar structure to the PER literature she has been engaging with. She anticipates being better at assessing work related to her research topic because she has a good solid background on the foundational work in her research interest area: I think, I mean, if I see some new theory, I’ll definitely pay more attention about to learn that. If they are talking about self-efficacy and they are not using Bandura, it’s like Bandura is everywhere and then so currently, I haven’t found anything new on the theory that I’m doing, but if I find a different one, I would use that as a keyword to find more paper. For Akemi, the PEER activities that played a role in her growth in forethought were mainly procedural knowledge workshops and guided engagement with DBER literature. Workshop structure, content and community at PEER helped research project refinement for Olivia, Madison and Akemi. For Olivia, workshops that iteratively addressed the refinement of research questions and the setting of specific DBER plans for the near future addressed aspects of the research design she had identified as needing help with pre-PEER. For Madison, the iterative process of research design that participants went through at PEER helped her refine her research design. She has also identified more specific parts of the research design she will want help with in the future. For Akemi, through readings and discussions, she learned about the norms of the field and how to situate and shape her project within it. §.§ Self-reactiveness In the literature, self-reactiveness refers to the motivation and self-regulation needed to execute planned actions. In our context, self-reactiveness refers to the interest in DBER that emerging STEM education researchers discuss, especially what drives their intrinsic motivation to engage in DBER. Similarly to forethought, a common interest motivates participants in engaging in DBER and multiple program elements affect growth in self-reactiveness. However, more program elements are highlighted and impact the nuances of self-reactiveness post-PEER than in forethought. Table <ref> summarizes the status of self-reactiveness pre and post PEER for participants. §.§.§ Pre-PEER Self-reactiveness For Olivia, competence and relatedness are the two components of self-determination theory that help her the most in doing DBER at her institution. She wants to do MER to increase her competence in teaching and thereby increase student success and persistence in math courses. She also wants to relate her research results from her classrooms to her institution: The driving force for me, and I know that math educators often don’t really want to talk about this in this way, has been to see students be more successful. Specifically, to pass at higher rates and to continue sort of to the next course at higher rates. And what, you know, I’m not interested in just what happens in my class, I’m interested in what happens at the institution. To have productive conversations about instructional change in her department, Olivia wants to use her own research-based findings from MER.
This refers to the relatedness of doing this type of research as it provides a means to communicate with evidence based, context-specific ways, her research results to her colleagues: You know, all my colleagues are math professors, which means you can’t just walk up to them and say, “hey, let’s try this thing.” If you don’t start with something that’s evidence based, if you’re not starting from a point of scholarship, you’re not going to get started. This last excerpt also underlines the value and the potential impact that DBER scholarship can have in bringing many faculty members part of her department on board in making instructional change. Olivia also wants her math department and university to find better ways to assess student learning, which ties into the self-regulation component of self-reactiveness. She is supported in her DBER engagement because of its potential to addresses critical and current needs at her institution: it is a context-specific, yet research-based way to improve success and retention in mathematics courses. Similarly competence, relatedness and autonomy are all elements that motivate and self-regulate Madison's engagement in DBER. In terms of competence, Madison wants to become a better physics teacher by improving her classroom practices. She describes doing PER is a way for her to become better at her job: I’ve also found myself really interested in physics education research, both as, you know, using it to help inform my teaching, but also, I’m just interested in learning more about how to be a good physics teacher. Features of autonomy and relatedness are discussed when Madison describes the freedom to pursue various new teaching evidence-based strategies in the classroom: So I feel like there’s been a lot of freedom there to pursue different teaching routes and, you know, this comes up in things like tenure and promotion too. Like, our department puts, I think, a good deal of weight and will give you a lot of credit for going and trying these new pedagogy. She articulates that research-based teaching practices are valued in tenure and promotion evaluations. This external regulation provided by her department motivates engagement in instructional change. PER is encouraged due to its potential benefits for student learning in a primarily undergraduate institution that attracts underrepresented groups and wants to best prepare them for their post-undergraduate careers. As for Akemi, competence is the most prominent component of self-determination theory that motivates her to engage in DBER. She wants to do PER projects to create more equitable learning environments for students in her classroom. She wants to investigate the ways in which she can increase self-efficacy of underrepresented students in her physics courses: I’m interested in that [doing physics education around promoting equity] because I found like minorities in classrooms are usually either lack of self-efficacy, a lack of confidence, or the opposite, they think they are good, they don’t know that they are bad at this stuff. So I just I’m interested in like how my students are doing and how they are thinking, and I think [doing a PER project around] that [can] help. Akemi does not articulate the ways in which her PER work will be evaluated or the ways she will assess her own endeavors in this new field of research for her. 
This is most likely due to her being currently in a temporary faculty position during this interview as she says that she is a visiting professor and is not required to do research. Pre-PEER in self-reactiveness, we identified that Olivia was motivated to do MER to improve passing and retention rates at her institution. Madison wanted to be a better physics teacher by engaging in PER to improve her classroom teaching practices. Akemi wanted to create more equitable physics classrooms so engaging in PER would allow her to investigate the interplay between self-efficacy and underrepresented populations in physics classrooms. Olivia and Madison related their motivation to the value of doing DBER would bring to convincing colleagues of instructional change, benefiting students at their respective institution and getting recognized for this type work in promotion and tenure evaluations. §.§.§ Post-PEER Self-reactiveness For Olivia, competence remains a primary motivator. She continues to want to improve teaching by doing MER at her particular institution. In terms of self-regulation, Olivia discusses her autonomy as she is reflecting on some of the discussions that occurred at PEER around mentorship and ways one can discuss the value of DBER in a department that may not be supportive of this type of research. She recognizes how much freedom she has compared to less senior faculty in pursuing DBER: doing a research study, getting results, making recommendations to her department about changes and being heard. Although she is already at an institution that values math education research, she is not expecting rewards from her school to motivate and regulate her engagement in MER. This integrated regulation is considered the type of extrinsic motivation regulator that leads to the most autonomy <cit.>, which shows how Olivia has a high level of autonomy in her MER work. The PEER activities that played a role in her growth in self-reactiveness were topical small group discussions and interactions with a range of career stages. Similarly for Madison, increasing competence remains a major motivator for pursuing PER. Improving teaching practices for the type of students her institution attracts informs her education research interests. Through self-reflection on the societal impact of her job, she hopes to help the student population of her institution get the most out of their education. Improving physics laboratory courses allows the development of technical skills that can be useful for students as they search for jobs after they graduate. The main PEER activity that played a role in her growth in self-reactiveness was generative writing regularly throughout the workshops. Through different programs elements than Olivia and Madison, Akemi refines her interest and her research project’s focus. Her motivation to do PER is still about equity in physics classrooms. By talking to facilitators and engaging with the broader PER literature, she finds ways to specifically enhance her project by situating her work within the vast array of research published about self-efficacy of students in physics classrooms and gender equity <cit.>. Given that she is in-between two temporary teaching positions, she does not elaborate on the ways she will be evaluated in her research endeavors. The PEER activities that played a role in her growth in self-reactiveness were interactions with facilitators with DBER expertise and guiding engagement with DBER literature. 
All three case study subjects were consistent in their motivation behind their reason to transition into doing DBER. For Olivia, engagement with participants at PEER further highlighted her freedom to pursue MER at this stage of her career and institution. For Madison, self-reflection on her impact as a physics instructor deepened her motivation to engage in PER to improve learning outcomes for her students. For Akemi, situating her work within PER helped her refine her interest. §.§ Self-reflectiveness In the literature, self-reflectiveness is defined as belief in one's perceived competence in their ability to undertake a behavior (self-efficacy). In our context, self-reflectiveness refers to what emerging STEM education researchers perceived competence in DBER to be. Similarly to growth in intentionality, forethought and self-reactiveness, growth happened in self-reflectiveness. However, there were more program elements that came into play in this component, making self-reflectiveness the agency component with the most growth. Table <ref> summarizes the status of self-reflectiveness through pre and post PEER for participants. §.§.§ Pre-PEER Self-reflectiveness Olivia expresses low self-efficacy when she describes her NSF grant proposal process she applied for to engage in MER. She did not ask for help from some of her colleagues because she feels unqualified to do MER compared to them despite collaborating with them in other areas of research: So the one thing I remember about is my professional colleagues that I didn’t collaborate with. Yeah, so the issue is… So I have, I would say, three colleagues, two of whom I’ve actually written research papers in mathematics with, who have done a lot of math ed grants, actually quite a few. And I did not partner with them [...] I was embarrassed. I know so little about it, even less, you know, and I have to admit, you know, I knew I was doing something for which I was unqualified. Her perception is that she does not have the experience that some other people she knows have when it comes to math education grant writing. Vicarious learning, which emphasizes performance comparison, nurtures this sense of low-self-efficacy. Mastery experience is another reason for her sense of low self-efficacy. Compared to her math research, she feels unqualified to do math education research because she does not know how to turn all her research interests into research projects, which a task she is confident in doing in her graph theory research. Similarly, Madison articulates that she lacks confidence in doing PER because she does not know how to carry out the different aspects of the research design. In terms of mastery experiences, she feels that she lacks competence in carrying out this type of research compared to her ability to do so in her experimental physics work: I think I would really, really love to be more confident in myself for my ability to design and carry out an education research project. Like, especially from the nuts and bolts of the education research side of things. Akemi also articulates low self-efficacy when she describes being unaware how to structure the research process. Even though she has put together a proposal that got accepted and she is already engaging in data collection in her classroom, her confidence in her ability to perform PER is low. She expresses throughout the transcript not knowing how to move forward with different steps of the research process if she gets stuck. 
In self-reflectiveness pre-PEER, there was a common trend of low self-efficacy among all three case study subjects, especially in terms of mastery experiences. §.§.§ Post-PEER Self-reflectiveness Olivia feels that she knows more about MER because she engaged with the MER literature and received informational knowledge about procedures of MER at PEER. This addresses mastery experience as she feels she can draw from her rich experience in math research herself to contribute to the field of MER. In turn, she feels more comfortable reaching out to collaborate with others because she has a better sense of what she can do for a project. Performance and experience comparison with researchers with various backgrounds at PEER, vicarious learning, contributes to her sense of higher self-efficacy: I at least have read some math education research. I at least have gone to a workshop where I learned about some things. I’m not totally ignorant about qualitative research and various kinds of surveys and various things like, you know, getting IRB approval and that sort of thing. I’m not just a complete dead weight to someone else who’s doing DBER research, if that makes sense. I don’t want to be dead weight Post-PEER, her confidence level is higher. She articulates the ways she feels that she can bring something useful to MER and serve as a bridging role among communities. She feels more confident in engaging with the MER community. The PEER activities that played a role in her growth in self-reflectiveness were interactions with facilitators with DBER expertise, interactions with a range of career stages, guided engagement with DBER literature and procedural knowledge workshops. For Madison, increase in self-efficacy is seen through the way she describes the impact of receiving concrete get-started information about PER. Narrowing down her research project to specific steps to write a grant proposal has helped her increase her sense of mastery experience, in turn her self-efficacy: So one of my big goals from this whole thing was like to feel confident enough that I could write a grant for my project. And I feel comfortable, much more comfortable now, that I could put a grant together because I have a much better awareness of like the literature I should be looking for and stuff like that. So that was really nice. Yeah, for me, a lot of the skills are the like what do you… Like, I am much more confident in my ability to start a project. Spending time articulating her research interest into a research question has also contributed to her gain confidence in the work she is doing, addressing the physiological component of self-efficacy: I feel really, really good and confident that I came out with some idea on okay, how do I go from it’s something I might be interested in to carving that into a research question and start to get the research done. The PEER activities that played a role in her growth in self-reflectiveness were guided engagement with DBER literature, topical small group discussions, procedural knowledge workshops and taking the time to do project mapping of goals with specific tasks for both the near (days, weeks) and far (months) future. For Akemi, it was comforting to get feedback on the work she was doing and how it may be useful to the DBER community. Given that she is new to the field, she feared that she might have missed someone else's publication. 
Verbal persuasion, which occurs when Akemi gets real time constructive feedback from peers and facilitators, plays a positive role in increasing her self-efficacy: So I kind of asked them whether I should… So I say, I’ve already my project on how oral quizzes impact students’ self-efficacy, and then they told me oh, it’s an interesting project, and then it’s not been done. So I think that’s very important information because I’m new to the education, to this field, and although I’ve already did the literature survey and did not find something similar, I always worry like whether I’ve missed some publication Verbal persuasion occurs when Akemi discusses how supportive the feedback at PEER was. She feels that people were enthusiastic about and valued her ideas: I feel like, I don’t know, I don’t feel this often, I hadn’t felt that my opinions are valuable in research for years. At least that’s not my general feeling in my PhD research, so when I kind of talk [...] they[facilitators] really value what I said, and I think that really boosts my self confidence in this area, like feel I can do educational research like that. So I think that helps a lot. Feeling valued in her research endeavors is an element of PEER that boosted Akemi’s self-confidence because she had not felt it in her condensed matter research during her graduate studies. Engaging in regular generative writing really helped her increase her sense of competence, mastery experience: I’m looking at my project, “okay, I can write something out of it,” and then the generative writing sections are really helpful. I don’t know how to do that at the beginning, it’s painful to write, I really hate writing. But now I can really sit down, wow, I can keep typing for one hour, or like half hour. It’s something that I could not imagine me doing, so I think there’s definitely some change in my ability to move my project forward. The PEER activities that played a role in her growth in self-reflectiveness were interactions with a range of career stages, individualized project feedback from facilitators and generative writing. For Olivia, engagement with PEER participants and facilitators, specifically discussing similar interests and comparing experiences with others at different stages of their DBER projects, increased her self-efficacy. For Madison, getting informational knowledge and turning her research interest into a research question translated into gain in self-efficacy. For Akemi, supportive real-time constructive feedback allowed to situate herself within the field and feel welcomed in this field of research, which boosted her self-efficacy. § DISCUSSION The key results of our analysis are summarized in Table <ref> where we highlight program activities as affecting participants' agency, within the theoretical framework. In the table, project mapping refers to mapping of goals with specific tasks for both the near (days, weeks) and far (months) future. Topical discussions refers to topical small group discussions. Facilitator interactions refer to interactions with facilitators with DBER expertise. Career-stage interactions refer to interactions with participants in range of career stages. Project feedback refers to individualized project feedback from facilitators. Generative writing refers to writing as a generative process to keep track of research process, ideas and next steps. DBER literature refers to guiding engagement with DBER literature. Procedural knowledge refers to procedural knowledge workshops. 
§.§ Interactions within participants for each aspect of agency growth Faculty professional development is highly dependent on home institution type, department priorities, and faculty career stage. As such, to understand how participants develop their agency in this new area of research, it is interesting to see how agency components evolve depending on each participant particular career stage and context. As a Full Professor in a math department, in the course of her engagement with PEER, Olivia focused on developing her existing research project and developed an understanding of where she can situate herself in mathematics education research (MER). The most noticeable growth in agency occurred thanks to her engagement with participants at various career stages and with various DBER expertise. This engagement really highlighted the autonomy she has as a Full Professor in her research endeavors in MER, leading to growth in self-reactiveness. This also translated into growth in self-efficacy as she was able to articulate what she could contribute to the field when engaging with both the math and math education research communities. As an Associate Professor in a physics department, Madison focused on narrowing down her research questions and getting started on writing an NSF grant to fund her physics education research (PER) project to expand her research portfolio. As a tenured professor, she has some leeway in pursuing different research interests, especially when finding evidence-based practices contextualized in her department is increasingly becoming a priority for her institutions. The procedural knowledge and the time to reflect and articulate her research interest during PEER led to growth in forethought and self-reflectiveness, leading to overall gain in agency. As an early-career professional, in the course of her engagement with PEER, Akemi focused on refining her research project and getting started with data collection for her project to see where PER could fit within her career trajectory, which led to overall growth in the agency. Mentorship and guidance from PEER facilitators, increased her sense of competence in self-reflectiveness and refined her motivation in self-reactiveness to pursue her research projects in PER. Although we see agency growth for each participant in this study, this exploratory analysis draws upon self-reported data of three faculty’s experiences, which cannot be generalized to all emerging STEM education researchers. Future work should include other participants’ experiences to explore contrasting experiences with agency growth, especially for STEM faculty at different career stages and at different types of institutions. §.§ Program activities across theory elements Exploring the impact of program activities across agency components provides evidence of activities that impact agency when designing a professional development program. Supporting activities in the growth of self-reactiveness were discussions of similar interest with participants and facilitators, engagement with key DBER literature and opportunities for self-reflection. Growth in intentionality occurred through the setting of specific DBER plans for the future, which enabled participants to break down research projects into specific and measurable steps to move forward. Growth in forethought occurred through receiving get-started information, engagement with peers, engagement with the DBER literature and the division of tasks into manageable pieces with multiple iterations. 
All these elements provided participants the opportunity to refine their projects and anticipate the specific actions and consequences they foresee as they move forward with their projects. One or a combination of sources of self-efficacy contributed to growth in self-reflectiveness. Verbal persuasion, through receiving real-time constructive feedback, translated into an increase in self-efficacy. Vicarious learning, through comparison with the expertise of researchers from various backgrounds, contributed to an increase in self-efficacy. Mastery experiences occurred through the transformation of general interests into specific questions and the receipt of procedural knowledge about the field. Articulation of realistic and specific goals addressed the physiological component of self-efficacy. Bandura says that self-efficacy is one of the strongest components in agency growth during change and adaptation in the workplace <cit.>. It is not surprising that increased self-efficacy echoed more broadly into gains in other areas of the agency framework. Nonetheless, varying and overlapping activities resonated with participants, which showcases the various possible ways a professional development program can contribute to increasing a sense of agency in a new research area. Program elements discussed in self-reflectiveness are the only ones that span across all other components of agency (forethought, intentionality, self-reactiveness). Our case study participants articulated the ways in which activities and interactions that address each component of self-efficacy are built into the structure of PEER. These elements of PEER that increase self-efficacy carry over to the three other components of agency, leading to an overall gain in agency. Program elements in forethought, intentionality and self-reactiveness stem from any exposure to the research process. They are not unique to getting started in DBER; exposure to and engagement with a research community will inevitably refine ideas in each of those areas. However, what we find is that the PEER program provides a structure for these elements that seems to resonate quite strongly with participants. PEER provides the space and community to be an agentic emerging STEM education researcher. PEER facilitates engagement in research tasks that jump-start emerging STEM education researchers' transition into DBER, especially when they have extensive training in other areas of research and experience in teaching. Thus, STEM faculty who already have extensive training in research and a myriad of teaching experiences in their specific discipline can chair their first research project in DBER when agency is a central tenet of the professional development opportunities they engage in. In contrast, some program elements, such as the setting of expectations and norms and some procedural workshops (e.g. observational data and theory workshops), were not brought up by these three case study participants. In this analysis, they were not factors explicitly affecting their agency. However, this does not mean that these activities do not affect other participants' agency and/or have a programmatic impact that leads to agency growth. First, the setting of expectations and norms puts forward the principles of PEER for participant engagement and community building, which makes this professional development opportunity an experiential learning experience in which agency growth happens as a consequence.
Second, the specific workshops not brought up may not have impacted agency development for these participants, but may have done so for others depending on where they are with their research. If their research interest is not immediately tied to observational data, it may not have had a significant enough impact to be brought up during interviews. In addition, we are looking at growth, and some topics, such as theory, that are overwhelming and an area of struggle for emerging education researchers <cit.> may not be brought up through this analysis lens. DBER's interdisciplinarity and the myriad of ways it is conducted can be challenging for new researchers interested in the field. For emerging STEM education researchers, finding professional development that addresses their concerns from an agentic perspective is a need that must be fulfilled. Support structures can come in various forms, but our research shows the process by which a professional development opportunity worked in favor of increasing self-efficacy and echoed more broadly into agency. This agency growth can sustain engagement in DBER, increase DBER research in different institutional contexts, and improve STEM education through effective evidence-based practices that stem from the particular needs of the institutional contexts in which the research interest originates. To build capacity and community for STEM education research, the DBER community should create professional development opportunities that focus on supporting agency in engaging in DBER, particularly self-efficacy, for emerging STEM education researchers. § CONCLUSION To improve STEM education, some STEM faculty jump-start their transition into DBER at different stages of their careers. To support their endeavors to conduct DBER in different instructional settings, our study identified elements of a professional development program that increase agency. Our case study analysis showed that addressing one or a combination of self-efficacy sources echoed into growth in other components of agency. This overall gain in agency supports emerging discipline-based education researchers' transition to the field. We thank Christopher Hass for his help with the data collection and inter-rater reliability. We thank Elizabeth Kustusch for transcribing our interviews. We also thank PEER participants, particularly our case study participants, for allowing us to study the experiences of emerging STEM education researchers. We also would like to thank the PEER facilitators who contributed to this virtual edition of the program. This work was supported by NSF DUE 2025174 / 2025170 and 1726479 / 1726113.
http://arxiv.org/abs/2307.04245v1
20230709185117
A Novel Pipeline for Improving Optical Character Recognition through Post-processing Using Natural Language Processing
[ "Aishik Rakshit", "Samyak Mehta", "Anirban Dasgupta" ]
cs.CV
[ "cs.CV", "cs.AI" ]
A Novel Pipeline for Improving Optical Character Recognition through Post-processing Using Natural Language Processing Aishik Rakshit, Samyak Mehta, Anirban Dasgupta ============================================== Optical Character Recognition (OCR) technology finds applications in digitizing books and unstructured documents, along with applications in other domains such as mobility statistics, law enforcement, traffic, security systems, etc. The state-of-the-art methods work well for OCR of printed text on license plates, shop names, etc. However, applications such as printed textbooks and handwritten texts have limited accuracy with existing techniques. The reason may be attributed to similar-looking characters and variations in handwritten characters. Since these issues are challenging to address with OCR technologies exclusively, we propose a post-processing approach using Natural Language Processing (NLP) tools. This work presents an end-to-end pipeline that first performs OCR on the handwritten or printed text and then improves its accuracy using NLP. OCR, NLP, Handwritten Text, Transformer, Paddle-Paddle § INTRODUCTION Optical Character Recognition (OCR) is a technology for extracting text from images containing text information <cit.>. Such images arise from photos containing text, scanned documents, scene photos, subtitle text superimposed on an image, etc. OCR is useful as images consume more memory space than text files. Moreover, text information is easier to copy and edit and is helpful in many artificial intelligence (AI) tools, particularly for Natural Language Processing (NLP) problems. Some general applications include self-service utility meter reading, intelligent traffic surveillance and parking systems, license plate recognition, contactless check-in at private and public transportation stations, intelligent security systems, digitizing old books, etc. <cit.>. As such, OCR helps to reduce crime, increase police efficiency, and improve safety <cit.>. The OCR methods recognize characters in the image independently by image segmentation, considering only the shape and structure of the characters. Significant research on OCR has been reported on recognizing text from scanned documents and number plates, with sufficient performance. Even OCR on handwritten texts in different languages has received much attention, however, with limited accuracy. Hence, there is scope for improvement in the efficiency of OCR of handwritten text. Even the OCR of printed text is yet to be perfect. The prime challenges for inaccurate or missing text are as follows: * variations in font style and size, * case sensitivity, * similar character shapes, such as `o' and `0', * varying orientations. These OCR mistakes negatively impact several NLP applications, including text summarization, part-of-speech (POS) tagging, sentence boundary detection, topic modeling, named entity recognition (NER), and text classification. The ability of NER tools to detect and identify proper nouns and classify them into the person, place, and organization categories significantly deteriorates when the error rate (ER) of OCR output rises. Post-processing OCR outputs can significantly help correct these mistakes and increase the accuracy of the outputs. Hence, the objective is to develop an end-to-end pipeline that first performs OCR on the single-line handwritten or printed text and then improves its accuracy by post-processing the OCR output using NLP.
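To make the intended two-stage pipeline concrete, the sketch below shows one minimal way it could be wired together. It assumes the publicly available Hugging Face TrOCR checkpoint microsoft/trocr-base-handwritten; the nlp_post_process function is a hypothetical placeholder for the NLP correction module described later and is not part of any library or of the authors' released code.

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# Load a pre-trained handwritten-text OCR model (encoder-decoder Transformer).
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

def ocr_single_line(image_path: str) -> str:
    """Run OCR on an image containing a single line of text."""
    image = Image.open(image_path).convert("RGB")
    pixel_values = processor(images=image, return_tensors="pt").pixel_values
    generated_ids = model.generate(pixel_values)
    return processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

def nlp_post_process(raw_text: str) -> str:
    """Hypothetical hook for the NLP correction step (e.g., a seq2seq corrector)."""
    return raw_text  # identity placeholder in this sketch

if __name__ == "__main__":
    noisy = ocr_single_line("line_01.png")  # assumed sample image path
    clean = nlp_post_process(noisy)
    print(noisy, "->", clean)
```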
§.§ Prior Art The current OCR approaches use Convolutional Neural Network (CNN)-based encoders for picture interpretation and Recurrent Neural Network (RNN)-based decoders for text generation. The two most popular OCR models are the Transformer-based OCR (Tr-OCR) model <cit.> and the Paddle-Paddle OCR (PP-OCR) model <cit.>. The Tr-OCR model uses the Transformer architecture for wordpiece-level text generation and image understanding. Tr-OCR has a pre-trained image Transformer as the encoder and a pre-trained text Transformer as the decoder. This model has been trained on the IAM handwritten dataset. The PP-OCR model consists of text detection, text recognition, and detected box rectification, using a convolutional recurrent neural network (CRNN) as the text recognizer at the back end. The CRNN has convolutional layers for feature extraction followed by recurrence for sequence modeling. These architectures produce efficient results if trained on a specific type of data. However, generalizing is difficult on unconstrained datasets due to the large variability. In the domain of OCR output correction, the prior algorithms mainly operate on the standard pipeline of deletes, followed by transposes, then replaces, and finally inserts. This method, used in implementing TextBlob's spelling correction, has taken Peter Norvig's "How to Write a Spelling Corrector" <cit.> as its basis. This approach is improved using Symspellpy <cit.>. The symmetric delete spelling correction algorithm lowers the complexity of edit candidate generation and dictionary lookup for a specific Damerau-Levenshtein distance. It is language-independent and is about six times faster than the traditional approach. § MATERIALS AND METHODS This work first evaluates two OCR models, viz., Tr-OCR and PP-OCR, on various handwritten and printed datasets. It then chooses the better-fitting model for recognizing single-line handwritten text. A line segmentation module for segmenting a multi-line document into single lines and a classifier that classifies each of these single lines as printed or handwritten text are also implemented. The output of the OCR model is then fed to our post-processing model, which improves the accuracy of the OCR output. The OCR output post-processing task aims to identify the sequence of words X = x_1 x_2 ... x_m present in the original hardcopy document given a sequence of n OCR-degraded tokens Y = y_1 y_2 ... y_n. It should be noted that n and m are not always equal because segmentation errors could result in OCR sub-sequences that are not correct word sequences. We divide our work into two modules. The first consists of the segmentation unit, the classification unit and the OCR model unit. The OCR models are evaluated on various real-life datasets. We then select the better-fitting model, whose output is fed to the second module, i.e., NLP-based post-processing. This module takes in the outputs of the OCR model and then post-processes them using NLP techniques to minimize error. §.§ Module-A: OCR Engine Module A consists of the first half of the pipeline, which first performs line segmentation on a multi-line document, then classifies each line as printed or handwritten text using a classifier, and then performs OCR on it using a suitable OCR model. The two popular existing OCR models are evaluated on various datasets with different fonts, handwritten datasets, and datasets with occluded text or background color and noise.
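As an illustration of the symmetric delete correction mentioned in the Prior Art subsection above, the following sketch uses the symspellpy package with its bundled English frequency dictionary. It is a minimal example of the technique, not the exact configuration used in this work; the sample input strings are invented.

```python
import pkg_resources
from symspellpy import SymSpell, Verbosity

# Symmetric delete spelling correction: pre-computes deletes of dictionary terms
# so that lookups within a given Damerau-Levenshtein distance are fast.
sym_spell = SymSpell(max_dictionary_edit_distance=2, prefix_length=7)
dict_path = pkg_resources.resource_filename(
    "symspellpy", "frequency_dictionary_en_82_765.txt"
)
sym_spell.load_dictionary(dict_path, term_index=0, count_index=1)

# Word-level correction of a single OCR-degraded token.
suggestions = sym_spell.lookup("recognitlon", Verbosity.CLOSEST, max_edit_distance=2)
print(suggestions[0].term if suggestions else "recognitlon")

# Compound correction handles split or merged words in a whole OCR line.
line = "opticalcharacter recognitlon of hand written text"
print(sym_spell.lookup_compound(line, max_edit_distance=2)[0].term)
```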
§.§.§ Segmentation The aim here is to segment lines in documents using the A* path planning algorithm <cit.>. The method proceeds as follows (a simplified code sketch of these steps is given after the dataset descriptions below): * We first input a non-skewed document of either handwritten or printed text into this model and then convert the input image to a 2D grayscale image. * We use a Sobel filter to detect the text edges in the image. The image is convolved with two 3x3 kernels (horizontal and vertical) to calculate the image derivatives. * We then find the horizontal projection profile (HPP) of the edge-detected image. The HPP is calculated as the array of row-wise sums, so peaks correspond to rows that contain text, whereas blank areas do not peak in the HPP graph. * We then detect peaks, for which we take a threshold of one-fourth of the difference between the maximum and minimum HPP values. This helps in separating the potential line-segmentation regions from the text. * We then make a cut in places where text from an upper line connects with text from the lower line. * We then apply A* path planning along each segmentation region and record the paths. This segments the document into single lines. §.§.§ Classification Convolutional neural networks (CNNs) are used to classify text lines as either printed or handwritten; however, it is actually the collection and preparation of the data that presents the biggest challenge. Presenting enough samples to an artificial neural network (ANN) is sufficient to achieve a decent level of accuracy for a wide range of tasks. In fact, current ANNs are already capable of handling extremely complicated data (such as ImageNet, which includes 90 different dog breeds to discriminate). The system we created for this work is a DenseNet-121 that has been modified for the binary classification of handwritten and printed text. It is wrapped in some utility classes. DenseNet-121 is a convolutional neural network with 121 layers, the majority of which are densely connected in 4 blocks. However, it has a comparatively low number of parameters for a network of its size and so requires less training data. More information on the classifier used can be found in <cit.>. §.§.§ Datasets The specific datasets that we have used for this purpose are: * Born-Digital Images Dataset <cit.>: This dataset contains images made digitally employing a desktop scanner, a camera, and screen capture software. It has 3564 images of words clipped from the actual images and a text file containing the ground truth transcription of all images provided. * Incidental Scene Text Dataset <cit.>: This dataset consists of 4468 cut-out word images corresponding to the axis-oriented bounding boxes of the words provided and a single text file with the ground truth. * License Plate Dataset <cit.>: This dataset has 209 cropped license plates using the original bounding boxes and has all the single characters labeled, creating a total of 2026 character bounding boxes. Every image comes with a .xml annotation file. * Single Line Handwritten Text Dataset <cit.>: This dataset contains images of handwritten single-line English texts whose labels are similar to the IAM dataset. There are around 400 images along with their labels. * Bing Images of Short Quotes: This dataset contains about 215 images of short quotes with different background styles. This dataset is unlabelled as its primary purpose is to see the improvements in the outputs after post-processing using NLP.
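The simplified sketch referenced in the Segmentation subsection above is given here. It implements the grayscale conversion, Sobel edge detection, horizontal projection profile, and quarter-range thresholding described there; for brevity it cuts along straight horizontal gaps instead of running the full A* path planning step, so it approximates the procedure rather than reproducing it exactly. The input filename is an assumption.

```python
import cv2
import numpy as np

def segment_lines(image_path: str):
    """Split a non-skewed document image into single-line images."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # Sobel derivatives (horizontal and vertical 3x3 kernels) combined into an edge map.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    edges = cv2.magnitude(gx, gy)

    # Horizontal projection profile: sum of edge strengths in each row.
    hpp = edges.sum(axis=1)

    # Rows below min + 1/4 of the range are treated as gaps between text lines.
    threshold = hpp.min() + (hpp.max() - hpp.min()) / 4.0
    is_text_row = hpp > threshold

    # Cut at runs of non-text rows; a straight cut stands in for the A* path search.
    lines, start = [], None
    for row, text in enumerate(is_text_row):
        if text and start is None:
            start = row
        elif not text and start is not None:
            lines.append(gray[start:row, :])
            start = None
    if start is not None:
        lines.append(gray[start:, :])
    return lines

for i, line_img in enumerate(segment_lines("document.png")):
    cv2.imwrite(f"line_{i:03d}.png", line_img)
```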
§.§.§ Performance Metrics The performance metrics used are the character error rate (CER) and the word error rate (WER). The CER gives the fraction of characters, including spaces, that are recognized incorrectly with respect to the ground truth text; the WER gives the corresponding fraction of incorrectly output words. §.§ Module-B: NLP Engine The models we consider are as follows: §.§.§ ByT5 The Google AI team debuted T5 <cit.>, also known as the Text-To-Text Transfer Transformer, in 2020. The encoder-decoder structure of the T5 transformer model is identical to that of conventional transformer models. There are 12 pairs of encoder-decoder blocks in it. Self-attention, a feed-forward network, and optional encoder-decoder attention are all present in each block. ByT5 <cit.> proposes a new model that can directly process raw text, i.e., it is token-free. The benefits are as follows: * It can process text in any language; tokenizers tailored to specific languages are not necessary. * It reduces the trouble of having complicated text preparation pipelines and is noise-resistant. * Since only 256 embeddings are needed for a byte-level model, a large vocabulary matrix is no longer required. §.§.§ BART The Bidirectional and Auto-Regressive Transformer (BART) <cit.> is a denoising autoencoder for pretraining sequence-to-sequence models. To train BART, the text is first corrupted using a random noise function, and a model is then learned to recreate the original text. It employs a typical Transformer-based neural machine translation architecture that, despite its simplicity, generalizes several more modern pretraining approaches, including GPT with its left-to-right decoder and BERT (owing to the bidirectional encoder). The dataset used to train the models in a supervised manner was generated synthetically from the OSCAR Corpus. §.§.§ Alpaca-LORA The Alpaca model was obtained by fine-tuning Meta's LLaMA 7B model through supervised learning on a set of 52K instruction-following demonstrations generated from OpenAI's text-davinci-003. The dataset generation process resulted in 52K distinct instructions and corresponding outputs, and was accomplished at a cost of less than $500 by utilizing the OpenAI API. Hugging Face's training framework was used to fine-tune the LLaMA models, with techniques such as Fully Sharded Data Parallel and mixed precision training being employed. The fine-tuning of a 7B LLaMA model was accomplished in 3 hours using eight 80 GB A100 GPUs. We used the Alpaca model in a zero-shot manner, and it was run in 8-bit precision using the bitsandbytes library. We tried multiple prompts with the Alpaca-LORA 7B model, and the one that worked best for us was f"Fix all the errors in the sentence : text". §.§.§ Synthetic Dataset Generation OCR-degraded text for training our ByT5 Transformer model is generated using the nlpaug <cit.> library. Its OCR augmenter is used to generate character-level errors in text from the OSCAR <cit.> Corpus. §.§.§ Preprocessing Inputs To prevent any discrepancies between the lengths of the original text, the text generated by the model, and the ground truth, we chunk the texts into lengths of 128 words. Since subword tokenization is being used, we set the maximum length to 256 and replace all the padding tokens with -100 to prevent loss calculation for them.
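As a concrete illustration of the preprocessing described above, the following is a minimal sketch of how an OCR-degraded/ground-truth text pair might be tokenized with padding positions masked to -100, using the Hugging Face transformers library; the checkpoint name is an assumption and the exact training code may differ.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")  # assumed checkpoint

def preprocess(ocr_text, clean_text, max_length=256):
    # Inputs are the OCR-degraded chunks; targets are the corresponding ground truth
    inputs = tokenizer(ocr_text, max_length=max_length,
                       truncation=True, padding="max_length")
    targets = tokenizer(clean_text, max_length=max_length,
                        truncation=True, padding="max_length")
    # Replace padding token ids in the labels with -100 so they are
    # ignored by the cross-entropy loss during training
    inputs["labels"] = [tok if tok != tokenizer.pad_token_id else -100
                        for tok in targets["input_ids"]]
    return inputs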
§.§.§ Post-Processing Model Outputs Insertion of the correct spacing into the output from the model is performed using a word distribution. Given a text corpus, we assume that all words are distributed independently; the relative frequency of each word is then all that is required. It is reasonable to assume that the words adhere to Zipf's law <cit.>, which states that the probability of the word with rank n in a list of words is approximately 1/(n log N), where N refers to the total number of words in the corpus. Once the model is fixed, we can utilize dynamic programming to determine the locations of the spaces. The sentence that maximizes the product of the probabilities of each individual word is the most likely one, and dynamic programming makes it simple to calculate. Rather than utilizing the probability itself, we use a cost defined as the logarithm of the inverse probability, to avoid numerical issues. This has been done using the wordninja <cit.> library. § RESULTS §.§ OCR model evaluation We first discuss the results of the two OCR systems (PP-OCR and Tr-OCR) on the various datasets discussed above, without any post-processing. We then proceed to show the results of the segmentation and classification sub-modules. §.§.§ Dataset 1: Born-Digital Images Dataset The outputs of some sample images in Fig. <ref> are shown in Table <ref>. The ultra-lightweight PP-OCR model, pre-trained on the English and Chinese languages, resulted in a CER of 0.44, while the Tr-OCR model, fine-tuned on the SROIE printed-text dataset, resulted in a CER of 0.3. Hence Tr-OCR performed better than PP-OCR on this dataset. §.§.§ Dataset 2: Incidental Scene Text Dataset The outputs of some sample images in Fig. <ref> are shown in Table <ref>. Using the ultra-lightweight PP-OCR model, which is pre-trained on the English and Chinese languages, resulted in a CER of 0.65, while the Tr-OCR model fine-tuned on the SROIE dataset (printed text) resulted in a CER of 0.41. Hence Tr-OCR performed better than PP-OCR on this dataset. §.§.§ Dataset 3: License Plate Dataset This dataset, described above, consists of 209 cropped license plates (as seen in Fig. <ref>), with all single characters labeled for a total of 2026 character bounding boxes. The outputs of some sample images in Fig. <ref> are shown in Table <ref>. Using the ultra-lightweight PP-OCR model pre-trained on the English and Chinese languages resulted in a CER of 0.18, while the Tr-OCR model, fine-tuned on the SROIE dataset of printed text, resulted in a CER of 0.24. Hence PP-OCR performed better than Tr-OCR on this dataset. §.§.§ Dataset 4: Single Line Handwritten Text Dataset This dataset contains handwritten single-line images (as seen in Fig. <ref> and Fig. <ref>), and it is labeled similarly to the IAM dataset. Around 400 handwritten line images with their labels are provided. Using the ultra-lightweight PP-OCR model, pre-trained on the English and Chinese languages, resulted in a CER of 0.53 and a WER of 0.8, while the Tr-OCR model pre-trained on the IAM dataset of handwritten text resulted in a CER of 0.09 and a WER of 0.24. Hence Tr-OCR performed better than PP-OCR on this dataset. The outputs of some sample images in Fig. <ref> and Fig. <ref> are shown in Table <ref> and Table <ref>, respectively. §.§ Classification The model to classify the text into handwritten and printed text was tested on two datasets, i.e., the Bing Images of Short Quotes (discussed earlier) and a self-made handwritten dataset of around 30 images.
On the handwritten document dataset, it classified 30 out of 32 images correctly as handwritten text and 2 incorrectly as printed text. On the printed quotes dataset, it classified 191 out of 198 images correctly as printed text and 7 incorrectly as handwritten text. Overall, the classification model has an accuracy of about 96%. §.§ Module-A pipeline Results A multi-line document is first fed to the segmentation module, which breaks the document down into single lines of text; each line is then fed to the classification model, which classifies it as handwritten or printed text. If it is handwritten text, the TrOCR model trained on handwritten text is used to perform OCR on it, and if it is classified as printed text, then the TrOCR model trained on printed text is used to perform OCR on it. The OCR outputs for the individual lines are then concatenated, and the output corresponding to the input document is obtained. Figure <ref> is an example of a handwritten document. After segmenting it into individual lines, we get Figure <ref>. The classification model classifies each line correctly as handwritten text, as shown in Table <ref>. We then perform OCR using the TrOCR model pre-trained on handwritten text. The results obtained are shown in Table <ref>. The CER for this example was 0.079 and the WER was 0.2. Similarly, we ran this pipeline over a few more examples, both printed and handwritten. The average CER over all these examples is 0.103 and the average WER is 0.274. §.§ Results after Post Processing Figures <ref> and <ref> show two results of our pipeline, with the images of one-line quotes, the OCR output, and the post-processed output. The outputs for Fig. <ref> and Fig. <ref> show how the spaces and spellings are corrected by the proposed pipeline. From Table <ref> and Table <ref>, we can see that both the CER and the WER on the datasets are reduced to a great extent. § CONCLUSION The evaluation of the two OCR models, viz. PP-OCR and TrOCR, over different datasets showed that TrOCR outperforms PP-OCR on all the datasets except the license plate dataset. Fine-tuning TrOCR on the license plate dataset is required to provide improved results, which can be considered as future work. Tr-OCR can be used for OCR of printed and handwritten texts, as it gives better results in both cases. The line segmentation module works well for non-skewed documents. For skewed documents, another segmentation algorithm has to be developed, which can be considered as another direction for future work. Similarly, our OCR output post-processing pipeline effectively reduces the errors in the OCR-degraded text. This observation can be seen in our results, where for the first synthetically generated dataset the WER of the OCR output came down from 0.455 to 0.045 and the CER came down from 0.124 to 0.005. Similarly, on the Kaggle Single Line Dataset, the CER decreased from 0.169 to 0.023 and the WER from 0.363 to 0.135. § ACKNOWLEDGEMENT The authors would like to thank the funds received from the IITG Startup grant (xEEESUGIITG01349ANRD001) for the research.
http://arxiv.org/abs/2307.04450v1
20230710100015
Toward a generative modeling analysis of CLAS exclusive $2π$ photoproduction
[ "T. Alghamdi", "Y. Alanazi", "M. Battaglieri", "L. Bibrzycki", "A. V. Golda", "A. N. Hiller Blin", "E. L. Isupov", "Y. Li", "L. Marsicano", "W. Melnitchouk", "V. I. Mokeev", "G. Montana", "A. Pilloni", "N. Sato", "A. P. Szczepaniak", "T. Vittorini" ]
hep-ph
[ "hep-ph", "hep-ex", "nucl-th" ]
JLAB-THY-23-3881 [email protected] AI-supported algorithms, particularly generative models, have been successfully used in a variety of different contexts. In this work, we demonstrate for the first time that generative adversarial networks (GANs) can be used in high-energy experimental physics to unfold detector effects from multi-particle final states, while preserving correlations between kinematic variables in multidimensional phase space. We perform a full closure test on two-pion photoproduction pseudodata generated with a realistic model in the kinematics of the Jefferson Lab CLAS experiment. The overlap of different reaction mechanisms leading to the same final state associated with the CLAS detector's nontrivial effects represents an ideal test case for AI-supported analysis. Uncertainty quantification performed via bootstrap provides an estimate of the systematic uncertainty associated with the procedure. The test demonstrates that GANs can reproduce highly correlated multidifferential cross sections even in the presence of detector-induced distortions in the training datasets, and provides a solid basis for applying the framework to real experimental data. Toward a generative modeling analysis of CLAS exclusive 2π photoproduction T. Vittorini0009-0002-4390-5670 August 12, 2023 ========================================================================== § INTRODUCTION Photoproduction of two pions, with photon energies in the few-GeV range, is an important process in hadron spectroscopy. It has been widely used to address several fundamental quests, such as the `missing baryons' problem, and to demonstrate that multiparticle final states are necessary to determine the spectrum. While copious data are available for single-pion photoproduction, and the correspondent phenomenology is well understood, the addition of a third particle in the final state makes the description of this reaction considerably more complicated. At fixed photon energy, the unpolarized single-pion photoproduction cross section is described by a single independent variable, while for two pions three additional variables are needed. At beam energies of a few GeV, the highest statistics data sample is available from the Jefferson Lab Hall B CLAS experiment  <cit.>. Even in this case, some bins in the multidimensional space are unpopulated or subject to large statistical fluctuations. This results in large uncertainties in extracting the underlying reaction mechanisms. The problem has been addressed by studying one or two variables at a time, while integrating over the others. During integration, correlations between variables, which in turn contain relevant physics information, are partially lost, making the results strongly model dependent. In this context, generative models based on machine learning (ML), which learn the original data distribution and create new so-called synthetic data that mimic the original distribution, can provide new opportunities for extracting the physics information preserving correlations. Furthermore, these models can provide another way to extract the `true' values from experimental data removing detector effects, with a procedure known as unfolding. Recently, an event-level unfolding analysis using generative adversarial networks (GANs) in inclusive electroproduction was performed <cit.>. The analysis was able to reconstruct accurately single-variable cross sections. 
Here, we extend our analysis framework to a multiparticle final state, demonstrating for the first time that GANs can be used to reproduce scattering reactions in a higher dimensional phase space. Specifically, we optimize our ML analysis framework to the case of two-pion photoproduction at CLAS kinematics. This study serves as an excellent testing ground for evaluating the effectiveness of the ML analysis framework in a highly nontrivial case. The presence of baryon and meson resonances with diverse production mechanisms, which overlap within a limited phase space, generate intricate structures and correlations. Moreover, the CLAS detector's highly non-uniform response introduces additional complexities and distortions, adding another layer of complication to the analysis. To test and validate the framework, we generate Monte Carlo (MC) pseudodata with a realistic model of two-pion photoproduction. We produce a synthetic copy with an “unfolding” GAN trained on pseudodata that incorporate detector effects through GEANT simulations <cit.>. This would be equivalent to train the GAN with experimental data. The detector effects are unfolded using a “detector-simulation” GAN, independently trained on a second MC pseudodata sample generated according to phase space and passed through the GEANT model of the detector. We test the quality of the procedure by a quantitative comparison between the generated MC data and its synthetic copy. This closure test, based on MC pseudodata, is a necessary step before applying our analysis framework to experimental data. The paper is organized as follows: in Sec. <ref> we review the importance of two-pion photoproduction in hadron spectroscopy and provide a detailed description of the kinematics. In Sec. <ref> we describe the MC framework used to generate pseudodata and incorporate the CLAS detector response. In Sec. <ref> we present the ML framework used for reproducing the detector effects and unfold the `true' distributions from the reconstructed pseudodata. The GAN results are reported in Sec. <ref>, where we compare the generated events with the synthetic copy. Finally, in Sec. <ref> we summarize the procedure and outline work in progress to extend the current framework to the analysis of real CLAS data from Jefferson Lab. § TWO-PION PHOTOPRODUCTION §.§ The physics case The ππ N final state is one of the largest contributors to the total photoproduction cross section off protons at center-of-mass (CM) energies W≲ 2.5 GeV. Studies of this final state have considerably extended the available information on the spectrum of the excited states of the nucleon (N^*) and their photoexcitation amplitudes. The quantum numbers of these resonances can be assessed by studying the correlations between the invariant mass and the angular dependencies of their decay products. Theoretical estimates based on phenomenological approaches <cit.>, continuum Schwinger methods <cit.> as well as from first principles within lattice QCD calculations <cit.>, have predicted more states than apparently observed in experiments (for reviews, see Refs. <cit.>), which is referred to as the `missing baryons' problem. A strategy to improve the sensitivity to the most elusive states is to impose consistency constraints by performing combined analyses of several final states at once, with ππ N playing a pivotal role for the resonances heavier than 1.6 GeV. 
This allows one to disentangle process-dependent nonresonant contributions, and extract the resonance properties in a nearly model-independent manner <cit.>. Furthermore, combining photoproduction and electroproduction data has recently proven to be effective in identifying overlapping resonances with the same quantum numbers, as in the case of the N(1720) and N'(1720) states <cit.>. In the same reaction, by looking at the invariant mass distribution of the ππ pair, one can study meson resonances, such as the ρ or the f_2(1270). While the properties of these resonances are well known, a detailed understanding of their production mechanisms is still missing. At low W ≲ 2 GeV one can study how each N^* state contributes to the meson production process. At higher energies, above the N^* resonance region, the reaction is well described in terms of Regge theory <cit.>. The two energy regimes are smoothly connected, making it nontrivial to study the intermediate region rigorously. A formalism to do so has been proposed recently for the production of single π or η mesons <cit.>. The extension to two-pseudoscalar final states requires having the full multidimensional dependence under control <cit.>. In particular, a complete understanding of meson production mechanisms in the ππ N final state, where resonances are well known, is necessary before facing the more complicated ηπ N and η^'π N channels, where exotic hadrons are expected to appear <cit.>. §.§ γ p →π^+ π^-p kinematics Measurement of the three-body final state in two-pion photoproduction represents a significant challenge to experiment. Recently a large body of data on π^+π^-p photoproduction observables has become available from measurements by the CLAS Collaboration, with W ≤ 2.9 GeV <cit.>. For a given collision energy, the differential cross section for this process depends on five independent variables, which can be chosen to be the invariant masses of the two pions, M_π^+π^-, and the proton-π^- pair, M_pπ^-, and three angles in the CM frame. Two of the angles are the polar angle θ_π^+, with the z-axis along the photon three-momentum, and the angle α_[π^+ p][π^-p'] between the plane containing the initial target proton p and π^+ three-momenta and the plane containing the π^- and recoiling proton p' three-momenta. An equivalent choice would replace θ_π^+ with the invariant momentum transferred t_π^+, defined as the difference squared between the photon and π^+ four-momenta. The fifth variable ϕ is the azimuthal angle of π^- with respect to the plane containing the photon three-momentum and the polarization vector, and is relevant only in experiments with polarized beam or target. For unpolarized data, one can still define ϕ by pointing the polarization vector in an arbitrary direction, resulting in a ϕ-independent cross section. Other possible choices for variables are M_pπ^+ (invariant mass of the proton-π^+ pair), t_π^- (momentum transferred between photon and π^-), t (momentum transferred between target and recoil protons), or cosθ (cosine of the angle between target and recoil protons in the CM frame). Multidimensional analyses are becoming standard, albeit computationally difficult, in modern high statistics experiments <cit.>. However, some specific reactions can suffer from limited statistics. In particular, the direct extraction of π^+ π^- p photoproduction events at a given W value, on a 5D grid (or 4D, if integrated over the angle ϕ) with a bin size acceptable for physics analyses, is quite challenging. 
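To illustrate how the invariant variables above are obtained from measured four-momenta, the following is a minimal NumPy sketch using the metric (+,-,-,-); the variable names are illustrative and this is not the CLAS analysis code.

import numpy as np

def invariant_sq(p):
    # p = (E, px, py, pz); returns the Lorentz-invariant square E^2 - |p|^2
    return p[0] ** 2 - np.dot(p[1:], p[1:])

def kinematic_variables(p_gamma, p_target, p_pip, p_pim, p_proton):
    M2_pippim = invariant_sq(p_pip + p_pim)        # M^2 of the two-pion system
    M2_ppim = invariant_sq(p_proton + p_pim)       # M^2 of the proton-pi^- system
    t_pip = invariant_sq(p_gamma - p_pip)          # momentum transfer photon -> pi^+
    W = np.sqrt(invariant_sq(p_gamma + p_target))  # total CM energy
    return M2_pippim, M2_ppim, t_pip, W

The remaining angular variables (θ_π^+ or α_[π^+ p][π^-p']) require boosting to the CM frame and constructing the decay planes, which is omitted here for brevity.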
Even the highest statistics π^+ π^- p photoproduction sample collected with CLAS <cit.> results in a limited number of counts in the 4D cells (typically <10 events per cell). In Ref. <cit.>, theoretical curves were fitted to the marginal 1D distributions, determined by integrating the acceptance- and efficiency-corrected 5D distribution over the remaining four variables. This procedure largely washes out the correlations present in the original data, leading to a significant loss of relevant information contained in the joint distribution. In this paper we aim to overcome this problem with ML techniques. To illustrate this, in Fig. <ref> we show two examples of 2D distributions and their 1D projections, as measured in the CLAS experiment without efficiency corrections <cit.>. From these distributions one immediately sees the presence of intermediate resonances that appear as enhancements in the invariant mass of the system in which they decay. For example, the band at M^2_p π^+≃ 1.5 GeV^2 corresponds to the Δ(1232) baryon resonance, which appears as an intermediate unstable state in the reaction γ p →Δ^++π^- → p π^+ π^-. The band centered at M^2_π^+ π^-≃ 0.6 GeV^2 corresponds instead to the ρ(770) meson resonance, in the reaction γ p → p ρ^0 → p π^+π^-. The two resonances are clearly visible as bumps in the respective 1D projections. Looking at 1D projections only, one can easily miss the presence of a resonance if the relevant invariant mass distribution is not explicitly considered. This is an example of loss of information that is contained in correlations. Moreover, because of quantum interference, ρ^0 and Δ^++ production are not independent processes, and it is impossible to associate one event exclusively with either process. This interference appears in the correlations between the invariant masses, and can be partially lost in the 1D projections. §.§ Two-pion photoproduction with CLAS The CLAS spectrometer in Hall B at Jefferson Lab was based on a ∼ 1.25 T toroidal magnet which bends charged particles produced in the hadronic interaction along the polar angles θ_lab (the z-axis along the photon beam), while preserving the azimuthal angles ϕ_lab. The polarity of the field determined whether positive/negative charges were bent towards or away from the beam line into the acceptance of the detector. A system of three layers of multi-wire drift chambers <cit.> provided momentum information with a resolution, σ_p/p, ranging from 0.5 to 1.0%, depending on the kinematics. Charged hadron identification was obtained by time-of-flight scintillators <cit.>. Photoproduction experiments were conducted with a bremsstrahlung photon beam produced by the CEBAF continuous electron beam impinging on a gold foil of 8 × 10^-5 radiation lengths thickness. A bremsstrahlung tagging system <cit.> with a photon energy resolution of 0.1% was used to measure the photon energy in each recorded event. The target cell was a Mylar cylinder, 4 cm in diameter and 40 cm long, filled with liquid hydrogen at 20.4 K. The experimental conditions reported in this paper, and simulated in the framework described in Sec. <ref>, correspond to the experiment that ran in CLAS in 2004. During the experiment, the torus field was such that positive particles were bent away from the beam line. The detector geometrical acceptance for each positive particle in the relevant kinematic region was about 40%, and somewhat less for negative particles (bent towards the beamline and out of the detector acceptance).
The primary electron beam energy was 4.02 GeV, providing a tagged photon beam in the energy range from 0.8 to 3.8 GeV. For this analysis we focus on the highest energy region, 3.0–3.8 GeV, that was analyzed in Ref. <cit.>. The exclusive reaction γ p →π^+ π^-p was isolated by detecting the proton and the π^+ in the CLAS spectrometer, while the π^- was reconstructed from the detected particle four-momenta using the missing-mass technique. In this way, the exclusivity of the reaction was ensured, keeping the contamination from the multipion background to a minimum level. Only events within a fiducial volume were retained in the analysis, in order to avoid the regions at the edge of the detector acceptance. Cuts were defined on the minimum proton momentum and the hadron minimum and maximum polar angle. After all the cuts, approximately 40 M events were identified as produced in exclusive two-pion photoproduction, making the dataset the largest statistics sample of this reaction in the above photon energy range. Details of the analysis can be found in Ref. <cit.>. § MC SIMULATION FRAMEWORKS In this section we describe the simulation frameworks used to perform the closure test. Pseudodata corresponding to two-pion photoproduction in the kinematics of the experiment were generated using two different MC event generators that produce the four-momenta of the final state particles. A realistic GEANT simulation was used to reproduce the finite resolution and limited acceptance of the CLAS detector. Detector effects were assessed with a first MC generator based on a pure phase-space distribution. To perform the closure test, we deployed a second MC generator based on a realistic physics model. The use of two different MC generators minimizes the model dependence in the extraction of the original information and mimics a real situation, where the detector effects are estimated with simulations that are similar but not identical to the experimental distributions. §.§ Two-pion event generators The two MC generators simulate the interaction of an incoming unpolarized photon beam with a bremsstrahlung spectrum, in the energy range 3.0–3.8 GeV, with a target proton at rest. With the choice of variables described in Sec. <ref>, the yields are proportional to the differential cross section, and thus to the square of the production amplitude A summed over polarizations, d^5 σ/(dM^2_pπ^- dM^2_π^+π^- dt_π^+ dα_[π^+ p][π^-p'] dϕ) ∝[(W^2 - (M_pπ^- + m_π)^2)(W^2 - (M_pπ^- - m_π)^2)]^-1/2 ×∑_pol|A (M^2_pπ^+,M^2_π^+π^-,cosθ_π^-,α_[π^+ p][π^-p'])|^2 . The first MC generator, referred to as phase space or PS-MC, distributes final state events according to the π^+ π^-p phase space. This corresponds to assuming that the production amplitude is a constant. This is clearly unrealistic since, as discussed above, two-pion photoproduction has a much more complicated structure. However, it has the advantage of being well defined and agnostic to physics models, and it distributes events uniformly across the full reaction kinematics. The 1D-projected PS-MC event distributions are shown in Fig. <ref>, while the 2D distributions are illustrated in Fig. <ref>. The second MC event generator, which we refer to as realistic or RE-MC, considers the amplitude squared as an incoherent sum of the three dominant intermediate resonances observed, γ p →(p ρ^0, Δ^++π^- , Δ^0π^+ )→π^+ π^-p, added to a ∼ 10% constant that mimics the nonresonant two-pion photoproduction contribution.
Each process has been weighted with the corresponding contribution to the total cross section as reported in Ref. <cit.>. The angular distributions relative to resonance production are parametrized from measured differential cross sections reported in the same database. The decays ρ→ππ and Δ→ p π are described using the correct spin structure with the decay matrix elements detailed in Ref. <cit.>. The resulting 1D and 2D projections for events generated by RE-MC are shown in Figs. <ref> and <ref>, respectively. We note that this model neglects the interference terms between the intermediate resonances. Despite this, the resulting distribution provides a reasonable description of the experimental data, showing resonance structures in the invariant masses and the correct angular behavior of particles in the final states. §.§ CLAS detector simulation The CLAS detector response has been simulated using the standard GEANT Monte Carlo simulation package, GSIM, used by the CLAS Collaboration <cit.>. It consists of a central steering and control package that calls a number of independent detector geometry and response packages. A post-processing code (GSIM-Post-Processor or GPP) has been used to fine tune the GSIM output to match the tails of the experimental resolution and other effects, such as the detector's dead channels, not described by the idealized GEANT-based simulation. The GSIM output has been fed to the same reconstruction code, RECSIS, used to process experimental data. We will refer to REC or detector-level events to identify the set of pseudodata as processed by the detector simulation, while GEN or vertex-level will identify the `true' events as generated by the MC code. As reported in Sec. <ref>, the CLAS detector has a nonuniform acceptance, reduced in the azimuthal angle ϕ_lab (around the beam) by the presence of the six coils of the toroidal magnet, and in the polar angle θ_lab (with respect to the beam direction) by the limited area covered by the drift chambers, calorimeter and time-of-flight systems. A further limitation concerns the minimum accepted momenta of charged hadrons, due to the energy loss in materials crossed along the track and to the effect of the toroidal magnetic field that bends low-momentum particles out of the detector acceptance. The limited CLAS acceptance results in a reduced yield in REC with respect to GEN events, since not all generated events are reconstructed. The effect of the CLAS acceptance on the π^+ variables in the laboratory frame is shown in Fig. <ref>. As any detector, CLAS has finite resolution, which `smears' the measured kinematic variables resulting in a difference between REC and GEN, even when the event is accepted. The smearing affects the reconstructed three-momenta of any detected particle within the CLAS acceptance with a distortion depending on the three-momentum of the particle. Figure <ref> shows the resolution on the detected (REC) π^+ momentum and polar angle as a function of the `true' (GEN) momentum, along with the projections in 1D corresponding to the CLAS relative momentum and angular resolution. Fitting the two curves to a double Gaussian line, we obtained δ p / p ∼ 0.8% and δθ / θ∼ 0.5%. A similar smearing affects the kinematic variables of the detected proton. The resolution of the CLAS detector is sufficiently high so as to allow the use of the missing mass technique to identify the exclusive two-pion reaction against the multipion background. 
The technique uses knowledge of the initial state and of the detected particles to calculate the invariant mass of the undetected system to fulfill energy-momentum conservation, within detector resolution. If all particles are detected, the missing mass is zero. If a single particle is undetected, its mass appears as a peak in the missing mass spectrum. If two or more particles are lost, the missing mass of the system is unconstrained and does not peak, but rather distributes smoothly. The technique is only applicable if the experimental resolution is sufficient to disentangle the missing mass peak from this multiparticle background. Clearly, the more particles are detected, the lower is the resolution for the missing mass due the error propagation, limiting the validity of the technique to reactions with a small number of particles in the final state. When the missing particle has been identified, its four-momentum is determined by energy and momentum conservation, and the final state can be fully reconstructed. In two-pion photoproduction, the requirement of at most a single undetected particle corresponds to the following topologies (missing particle in parentheses): pπ^+(π^-), pπ^-(π^+), π^+π^-(p) and π^+π^-p (all three detected). Considering the CLAS acceptance, the yield of different topologies is quite different, with a ratio of (100 :37:30:35) for the respective topologies. Since the pπ^+(π^-) is by far the dominant contribution to REC data, we focus on this topology, although similar conclusions also hold for the others. Each topology is in one-to-one correspondence with different areas of the allowed phase space, and a combination of different topologies would therefore extend the kinematic coverage of the measurement, mitigating the effect of the limited detector acceptance. Figure <ref> shows the missing mass distribution of the pπ^+(π^-) topology. This exclusive final state is identified by selecting events with missing mass in the peak. Since these simulations only contain the two-pion final state, no multiparticle background populates the plot. The equivalent distribution for data shows a significant multipion background <cit.> that populates the positive side of the missing mass spectrum, and is rejected during the analysis to assure the reaction exclusivity. § GAN-BASED UNFOLDING METHODOLOGY GANs, a type of neural networks that have gained significant attention in recent years, are powerful generative models highly effective in generating high-quality, realistic data in various fields <cit.>. The architecture of a typical GAN involves a generator network that learns to produce data and a discriminator network that learns to differentiate between the generated and reference data. The two networks are trained alternately in a competitive setting, where the generator tries to produce more realistic data to fool the discriminator, and the discriminator tries to correctly identify the generated data. This iterative process leads to the generation of data that are progressively more realistic, with the ultimate goal of producing synthetic data that are indistinguishable from the reference data. GANs have been widely applied in many domains, such as image synthesis <cit.>, text generation <cit.>, music composition <cit.>, and videos <cit.>, and have demonstrated impressive results. 
In image synthesis, GANs have been used to generate highly realistic images visually indistinguishable from real images, which has numerous practical applications in fields such as gaming, film, and art. Successfully training GANs can be notoriously challenging, however. Numerous GAN models experience significant issues, such as mode collapse, non-convergence, model parameter oscillation, destabilization, vanishing gradients, and overfitting, resulting in an unbalanced training of the generator and discriminator <cit.>. In contrast to typical GAN applications, the success of a GAN-based event generator in nuclear and particle physics depends on its ability to accurately reproduce correlations among the momenta of the particles, which becomes increasingly challenging beyond two dimensions. Moreover, the multidimensional momentum distributions of events associated with nuclear and high-energy physics reactions, such as the two-pion photoproduction process considered in this work, exhibit highly complex patterns and range over orders of magnitude across the phase space. The task of developing an appropriate GAN architecture that is able to simultaneously reproduce all the correlations among particle momenta, and accurately reproduce multidimensional histograms, is therefore rather difficult. Machine learning event generators have gained prominence as efficient fast simulation tools in various scientific fields, including high-energy and nuclear physics <cit.>. Unlike traditional simulation methods that rely on a theoretical framework for the underlying reaction, machine learning event generators learn from large datasets and use this knowledge to produce new events with high fidelity. GANs have emerged as powerful tools in the field of fast simulation, where they learn to generate events that closely resemble reference data, capturing the underlying physics processes and their distributions <cit.>. Furthermore, GANs have been employed to address the challenge of simulating detector effects in fast simulation <cit.>. This application of GANs helps bridge the gap between simulated and reference data, enabling more realistic and precise simulations for experimental analyses. A comprehensive survey of existing ML-based event generators can be found in Ref. <cit.>. In this study, we employ the architectural framework of the Least Squares GAN, which involves substituting the cross entropy loss function in the discriminator component of a conventional GAN with a least square term. For further details, see Ref. <cit.>. In the following, we describe the GAN architecture used to generate the synthetic data that reproduce the γ p →π^+ π^-p RE-MC pseudodata. As mentioned above, two different GANs were developed and combined. The detector simulation GAN (DS-GAN) was trained on PS-MC pseudodata to learn the detector effects, and was later inserted between the generator and the discriminator of the unfolding GAN (UNF-GAN) to unfold the GEN vertex-level information from REC pseudodata. §.§ Detector simulation GAN (DS-GAN) In order to capture the detector effects, we have developed an ML-based detector simulation using a conditional GAN <cit.>, as illustrated in Fig. <ref>. Our approach involves training a conditional GAN generator to simulate the detector's smearing effect so that it generates synthetic REC detector-level events from input noise and PS-MC GEN events. The GEN PS-MC accepted events are passed through the GEANT chain to obtain REC pseudodata. As proposed by Bellagente et al. 
<cit.>, both the synthetic REC and REC pseudodata are “concatenated” with original GEN events and fed to the GAN discriminator as input to facilitate convergence. After successful training, the DS-GAN generator serves as the ML detector surrogate that will be integrated into the UNF-GAN architecture. Summarizing the model architecture of the DS-GAN, the generator, conditioned on accepted events (GEN), takes in as input a 100-dimensional array of random values with a mean of 0 and a standard deviation of 1. The generator network consists of five hidden layers, each with 128 neurons, using a leaky rectified linear unit (ReLU) activation function. The final hidden layer is connected to a four-neuron output layer, which uses a linear function to represent the generated features. At the end of the training, the DS-GAN generator learns how to convert the GEN accepted events into REC events, effectively mimicking the smearing due to the detector as described by GEANT. The discriminator is made of a neural network with five hidden dense layers. The first three layers have 256 neurons each, while the fourth has 128 neurons and the fifth has 32 neurons. A leaky ReLU activation function is used for all the layers. To prevent overfitting during training, a 5% dropout rate is implemented for each hidden layer. The last hidden layer is fully connected to a single-neuron output, activated by a linear function, where “1” indicates a true event and “0” is a fake event. The DS-GAN was trained using about 1M two-pion event samples for 80K adversarial epochs, with an epoch defined as one pass through the training dataset. Both the generator and discriminator were trained using the Adam optimizer <cit.> with a learning rate of 10^-5 and exponential decay rates for the moment estimates (β1 = 0.5, and β2 = 0.9). §.§ Unfolding GAN (UNF-GAN) The training process for the UNF-GAN is illustrated in Fig. <ref>, which depicts the variation of a typical GAN model structure consisting of a conditional generator and a discriminator. The generator takes as input the photon energy generated by the RE-MC, along with a 100-dimensional white noise vector centered at zero with a unit standard deviation. This combination of inputs allows the generator, implemented as a deep neural network, to transform the noise and photon energy into a minimal set of event features/variables that effectively describe the two-pion photoproduction reaction. To strike a balance between execution time and convergence, the generator network is designed with 7 hidden dense layers. The number of neurons in each layer follows the sequence: 16, 32, 64, 128, 256, 512, and 1024, all of which are activated by the ReLU function. The last hidden layer is fully connected to a 4-neuron output layer, activated by a linear function. This output layer represents the independent variables M^2_π^+π^-, M^2_pπ^-, t_π^+, and α_[π^+ p][π^-p'] that are specifically chosen to describe the reaction. The synthetic GEN event features, generated by the conditional GAN generator, are then fed into the DS-GAN to incorporate the detector effects, and then compared to REC pseudodata obtained by passing the GEN RE-MC pseudodata through GEANT. The training process involved utilizing approximately 400k two-pion event samples for a duration of around 200k adversarial epochs per UNF-GAN model. Consistent configuration parameters for the Adam optimizer were maintained, utilizing the same settings as employed for the DS-GAN. 
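For concreteness, the following is a minimal PyTorch sketch of generator and discriminator networks with the layer sizes quoted above for the DS-GAN; it is a schematic re-implementation for illustration only (the conditioning is assumed to be a simple concatenation, the LeakyReLU slope is an assumption, and the least-squares adversarial training loop is omitted).

import torch
import torch.nn as nn

class DSGenerator(nn.Module):
    # Maps 100-dim noise, conditioned on the 4 GEN features, to 4 synthetic REC features
    def __init__(self, noise_dim=100, n_features=4, width=128, depth=5):
        super().__init__()
        layers, in_dim = [], noise_dim + n_features
        for _ in range(depth):
            layers += [nn.Linear(in_dim, width), nn.LeakyReLU(0.2)]
            in_dim = width
        layers.append(nn.Linear(in_dim, n_features))   # linear output layer
        self.net = nn.Sequential(*layers)

    def forward(self, noise, gen_features):
        return self.net(torch.cat([noise, gen_features], dim=-1))

class DSDiscriminator(nn.Module):
    # Five hidden layers (256, 256, 256, 128, 32), 5% dropout, single linear output.
    # Input: REC (or synthetic REC) features concatenated with the GEN features.
    def __init__(self, in_dim=8, widths=(256, 256, 256, 128, 32)):
        super().__init__()
        layers = []
        for w in widths:
            layers += [nn.Linear(in_dim, w), nn.LeakyReLU(0.2), nn.Dropout(0.05)]
            in_dim = w
        layers.append(nn.Linear(in_dim, 1))             # least-squares GAN: no sigmoid
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)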
During the training, the generator and the discriminator engage in an adversarial competition, with both updating their parameters throughout the process. Eventually, the generator is able to generate synthetic REC samples that are indistinguishable from the REC pseudodata samples. This means that the discriminator's ability to correctly classify whether a sample is genuine or synthetic approximates random chance. §.§ Uncertainty quantification As neural networks become increasingly employed in physics analysis, it becomes crucial to accurately assess the reliability of ML predictions. The statistics of the synthetic samples can be made arbitrarily high, so that there is no need to consider a statistical uncertainty. However, it is important to quantify the systematic uncertainty related to the training procedure, and for this a bootstrap resampling technique was employed. For the DS-GAN, the procedure involved training a total of 20 neural networks independently from the beginning. Each one was trained on a different random sample set drawn from the original dataset with replacement, resulting in datasets of the same size but with potentially different observations. For the UNF-GAN a similar procedure was adopted, with 20 different networks trained independently using the same bootstrap resampling technique. Moreover, each of the 20 UNF-GANs used a different DS-GAN of the 20 discussed above. In this way, the systematic uncertainties associated with the DS- and UNF-GANs are effectively combined. While it is possible that using a higher number of bootstraps could potentially lead to more precise uncertainty estimates, we found that training 20 GANs provided reasonably stable and consistent results. It is important to note that the specific number of bootstraps can vary depending on the characteristics of the problem, available data, and desired level of uncertainty quantification. In this particular case, 20 bootstraps were deemed sufficient for accurately capturing and quantifying the uncertainties associated with the observables. Furthermore, changing the network architecture was not essential because the convergence we achieved, along with the estimated error and uncertainty quantification, clearly indicate that this architecture is capable of accurately reproducing the data without introducing further systematic uncertainties. § RESULTS In this section we now discuss the DS-GAN and UNF-GAN performance, comparing synthetic to the REC and GEN pseudodata. We use the nomenclature REC_SYN and GEN_SYN to indicate synthetic data at the detector and vertex levels, respectively. To visualize the comparison, we build marginal 1D and 2D histograms for some kinematic variables. To show that correlations are correctly accounted for, we also study the distribution of one variable in some slices of the other variables. Synthetic data are generated with the bootstrap procedure detailed in Sec. <ref>, so that the standard deviation σ_SYN corresponds to the systematic uncertainty. In all our results, the average μ_SYN is shown as a solid line, together with an error band of width ± 1σ_SYN, while pseudodata are represented by dots with their statistical uncertainty σ_pseudodata. To quantify the level of agreement between the synthetic data and pseudodata, we plot the pull for each bin, defined as pull = (μ_SYN - μ_pseudodata)/√(σ^2_SYN + σ^2_pseudodata), where μ_pseudodata denotes the mean of the pseudodata.
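As an illustration of how the bootstrap band and the per-bin pull defined above might be computed, the following is a minimal NumPy sketch; the binning, the Poisson uncertainty for the pseudodata, and the assumption that each synthetic replica is normalized to the pseudodata yield are illustrative choices, not the exact analysis code.

import numpy as np

def pull_per_bin(synthetic_replicas, pseudodata, bins):
    # synthetic_replicas: list of 1D arrays, one per bootstrap GAN replica,
    # assumed to be normalized to the pseudodata sample size
    hists = np.array([np.histogram(r, bins=bins)[0] for r in synthetic_replicas])
    mu_syn = hists.mean(axis=0)        # bootstrap mean per bin
    sigma_syn = hists.std(axis=0)      # bootstrap (systematic) spread per bin
    counts = np.histogram(pseudodata, bins=bins)[0].astype(float)
    sigma_pseudo = np.sqrt(counts)     # Poisson statistical uncertainty
    return (mu_syn - counts) / np.sqrt(sigma_syn ** 2 + sigma_pseudo ** 2)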
§.§ DS-GAN The DS-GAN is trained on four independent variables: the invariant masses M_pπ^-^2 and M_π^+π^-^2, t_π^+, and the angle α_[π^+ p][π^-p']. The comparison between REC_SYN and pseudodata PS-MC REC distributions is shown in Fig. <ref>. In Fig. <ref> the comparison is extended to other physics-relevant distributions not used in the training and derived from the four above-mentioned variables, namely M_pπ^+^2, t_π^-, t, and cosθ. The agreement, quantified by the pull distributions shown at the bottom of each plot, is remarkable in both cases, with most of the points lying within 1σ. This indicates that the DS-GAN is indeed able to learn the CLAS detector effects. Bidimensional distributions from MC and synthetic data are shown in Fig. <ref>. The π^+ absolute momentum resolution as obtained from pseudodata (REC-GEN) is shown in Fig. <ref>, along with synthetic data (REC_SYN-GEN). The two distributions are in very good agreement, indicating that the synthetic data incorporate the correct resolution of the detector. Similar results hold for the other kinematic variables of all particles. These comparisons demonstrate the ability of the DS-GAN to learn and reproduce detector effects in a multidimensional space, even in the tails of the distributions. This confirms that generative models can indeed be used as an efficient and fast proxy for more computationally expensive GEANT simulations <cit.>. §.§ UNF-GAN As described in Sec. <ref>, the final step in the closure test is to use REC RE-MC pseudodata to train the UNF-GAN, extract the GEN_SYN distributions, and compare them with GEN pseudodata. Figure <ref> shows the comparison between GEN and GEN_SYN for the four training variables. We can see a very good agreement between pseudo- and synthetic data at the vertex level, despite the fact that the UNF-GAN was trained on detector-level pseudodata. This clearly demonstrates the success of the unfolding procedure. Moreover, the vast majority of pulls lie within ± 1σ, indicating that the uncertainty quantification is appropriate. The key point of this closure test is to demonstrate that synthetic data maintain the correlations of the original pseudodata. We checked that this is indeed the case: in Fig. <ref> we display an example of 2D distributions featuring strong correlations. We give a quantitative determination of the success of the procedure by calculating the pulls, shown in Fig. <ref>, which turn out to be normally distributed, as expected. The good agreement and the preservation of correlations remain valid for derived kinematic variables that were not used for training. Examples are shown in Fig. <ref> for invariant and CM variables, and in Fig. <ref> for variables in the lab frame. It is worth noting that in the lab frame the GEN pseudodata exhibit sharp features due to detector acceptance. These features cannot be properly captured by the GANs, which are trained on invariant variables. Even so, this results in a ≲ 2σ local discrepancy in the 1D projections. If better agreement is needed, lab frame variables can be added to the training set. Finally, in Fig. <ref> we compare 1D distributions in a given bin of the other variables. The success of this test shows that the correlations underlying the multidifferential cross section are correctly reproduced in the synthetic datasets. § CONCLUSIONS AND OUTLOOK One of the central results of this paper is the demonstration that a generative adversarial network can be used to reproduce a realistic multibody physics reaction.
As a case study, we have used two-pion photoproduction in the kinematics of the Jefferson Lab CLAS experiment. This process represents an ideal test case, where several baryon and meson production mechanisms overlap, resulting in rich and complex observable distributions. The nonuniformity of the CLAS detector response further adds complication to the challenge. In order to validate the framework, we have performed a closure test to demonstrate that synthetic data correctly reproduce the multidifferential cross section, preserving correlations between kinematic variables. Detector effects were also correctly unfolded by the procedure. We deployed two MC event generators, one distributed according to pure phase space, and the other incorporating a realistic physics model. Generated pseudodata were fed into a GEANT-based detector model to realistically take into account the detector response. Phase-space pseudodata were used to train a GAN-based proxy to learn the detector effects, and realistic pseudodata were then used to train the unfolding GAN and generate synthetic copies of MC events. The uncertainty quantification of the entire procedure was assessed by combining bootstraps for the two NNs. Comparison between the true and GAN-generated samples demonstrated that, within the quoted systematic error, the NN is able to reproduce training and derived kinematic variables, as well as to unfold the detector effects in multiple dimensions. This work represents a first step towards a full AI-supported analysis of CLAS exclusive two-pion photoproduction data. It demonstrates that the same analysis framework, trained on CLAS data, can provide a synthetic copy of the experimental data, preserving correlations between kinematic variables and unfolding the detector effects. Physics interpretation in terms of production mechanisms, separating different contributions and extracting resonance parameters from the unfolded data, will follow. An extension of this framework to include the different topologies and to extrapolate in a controlled (albeit model-dependent) way outside the detector acceptance is also in progress. We thank J. Qiu for helpful discussions. This work was supported by the Jefferson Lab LDRD project No. LDRD19-13 and No. LDRD20-18, and in part by the U.S. Department of Energy contract DE-AC05-06OR23177, under which Jefferson Science Associates, LLC, manages and operates Jefferson Lab. ANHB is supported by the DFG through the Research Unit FOR 2926 (project number 409651613). TA was supported by a Ph.D. scholarship from Al-Baha University, Saudi Arabia. The work of NS was supported by the DOE, Office of Science, Office of Nuclear Physics in the Early Career Program. This work contributes to the aims of the U.S. Department of Energy ExoHad Topical Collaboration, contract DE-SC0023598.
http://arxiv.org/abs/2307.05882v1
20230712030615
Knowledge-Driven Resource Allocation for D2D Networks: A WMMSE Unrolled Graph Neural Network Approach
[ "Hao Yang", "Nan Cheng", "Ruijin Sun", "Wei Quan", "Rong Chai", "Khalid Aldubaikhy", "Abdullah Alqasir", "Xuemin Shen" ]
eess.SY
[ "eess.SY", "cs.SY" ]
Knowledge-Driven Resource Allocation for D2D Networks: A WMMSE Unrolled Graph Neural Network Approach Hao Yang, Student Member, IEEE, Nan Cheng, Member, IEEE, Ruijin Sun, Member, IEEE, Wei Quan, Member, IEEE, Rong Chai, Senior Member, IEEE, Khalid Aldubaikhy, Member, IEEE, Abdullah Alqasir, Member, IEEE, and Xuemin (Sherman) Shen, Fellow, IEEE Hao Yang, Nan Cheng, and Ruijin Sun are with the State Key Lab. of ISN and School of Telecommunications Engineering, Xidian University, Xi’an 710071, China (e-mail: [email protected]; [email protected]; [email protected]). Wei Quan is with School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China (e-mail: [email protected]). Rong Chai is with School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China (e-mail: [email protected]). K. Aldubaikhy and A. Alqasir are with the Department of Electrical Engineering, College of Engineering, Qassim University, Qassim, Saudi Arabia (e-mail: {khalid, a.alqasir}@qec.edu.sa). Xuemin (Sherman) Shen is with the Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, N2L 3G1, Canada (e-mail: [email protected]). Corresponding Author: Ruijin Sun. August 12, 2023 ========== This paper proposes a novel knowledge-driven approach for resource allocation in device-to-device (D2D) networks using a graph neural network (GNN) architecture. To meet the millisecond-level timeliness and scalability required for the dynamic network environment, our proposed approach incorporates the deep unrolling of the weighted minimum mean square error (WMMSE) algorithm, referred to as domain knowledge, into GNN, thereby reducing computational delay and sample complexity while adapting to various data distributions. Specifically, the aggregation and update functions in the GNN architecture are designed by utilizing the summation and power calculation components of the WMMSE algorithm, which leads to improved model generalization and interpretability. Theoretical analysis of the proposed approach reveals its capability to simplify intricate end-to-end mappings and diminish the model exploration space, resulting in increased network expressiveness and enhanced optimization performance.
Simulation results demonstrate the robustness, scalability, and strong performance of the proposed knowledge-driven resource allocation approach across diverse communication topologies without retraining. Our findings contribute to the development of efficient and scalable wireless resource management solutions for distributed and dynamic networks with strict latency requirements. Deep unrolling, GNN, knowledge-driven resource allocation, WMMSE algorithm, wireless communication § INTRODUCTION In the era of 6G, mobile communication networks are envisioned to provide a wide variety of services and applications, from data-intensive services such as extended reality (XR), reliable and low-latency services such as autonomous driving and remote surgery, to the soaring intelligent services such as metaverse and ChatGPT <cit.>. Furthermore, 6G networks are becoming increasingly complex and dynamic as the emergence and fast development of space-air-ground integrated networks (SAGINs) significantly enlarge the network scale and require efficient management of multi-dimensional resources<cit.>. The complexity poses a significant challenge to wireless network management, such as resource allocation schemes and task scheduling, to fulfill the service requirements, especially delay-sensitive and reliability services, where a fault or delayed decision may lead to fatal outcomes. Therefore, it is critical to design efficient, responsive, and scalable wireless network management schemes in 6G networks. Wireless resource allocation plays a pivotal role in network management to allocate spatial and temporal wireless resources for certain goals, such as maximizing the transmission rate or minimizing the transmission delay or energy consumption. A plethora of model-based iterative algorithms, such as iterative water-filling type algorithms <cit.>, WMMSE algorithms <cit.>, and successive convex approximation algorithms <cit.>, have been proposed based on the convex optimization theory to address the wireless resource allocation problem. These algorithms have successfully solved classical resource allocation problems, commonly with small network scale and low-level network dynamics. However, as the network scale increases, the high computational complexity associated with multiple iterations of these algorithms can hardly meet the stringent milliseconds-level service requirements. Due to the efficient real-time computational capabilities, deep learning techniques have found application in diverse areas of wireless communication systems<cit.>, such as UAV-assisted IoT applications<cit.>,spectrum Sharing<cit.>, resource management<cit.>, and mobile computing offloading<cit.>. Ye et al. employed deep neural networks (DNNs) for channel estimation and signal detection, achieving efficient handling of channel distortion<cit.>. He et al. investigated the utilization of convolutional neural network (CNN)-based architectures for channel estimation in beamspace millimeter-wave massive multiple-input-multiple-output (MIMO) systems, surpassing the most advanced compressed sensing-based algorithms. Tang and Wong utilized bidirectional long short-term memory (LSTM) in mobile edge computing systems to handle user scheduling problems in ad hoc networks, effectively minimizing the average age of information <cit.>. 
To solve the resource allocation problem for weighted sum rate maximization, multi-layer perceptrons (MLP) <cit.> and CNN <cit.> are employed to approximate the WMMSE algorithm, using the algorithm outputs as labels and reducing computational complexity. In <cit.>, the objective function, i.e., the weighted sum rate, is regarded as the loss function, achieving better performance. These studies exhibit impressive performance and low inference complexity. However, applying them effectively to radio resource management for an arbitrary number of users faces a significant challenge. Commonly utilized neural networks, including DNNs, CNNs, recurrent neural networks, and attention-based Transformer models, are not readily scalable to an arbitrary number of users, as their input and output dimensions must remain constant. Therefore, designing scalable neural network architectures is crucial for effectively managing wireless resources, given the dynamic fluctuations in user numbers within mobile applications. In this context, a graph structure with expandable connecting nodes is more suitable for capturing the dynamic characteristics of wireless networks. By incorporating such a graph structure into neural networks, GNNs are envisioned as a potential solution to realize scalable resource allocation<cit.>. A random edge graph neural network (REGNN) is proposed to enhance scalability and generalization for optimal power control in interference channels <cit.>. Addressing the limitations of REGNN in heterogeneous agents and multi-antenna systems, the interference graph convolutional network (IGCNet) is proposed in <cit.>. Furthermore, a message-passing graph neural network (MPGNN) is presented for tackling large-scale wireless resource management problems, such as beamforming, user association, and channel estimation<cit.>. The authors established the equivalence between MPGNN and distributed optimization algorithms, showcasing its performance and generalization capabilities. While GNNs demonstrate scalability, their intrinsic learning approach primarily relies on statistical distributions and offers poor interpretability, making it difficult to accommodate varying distributions and necessitating a large amount of training data for a particular distribution. In the context of radio resource management, collecting identically distributed training data is time-consuming and costly, and the dynamic nature of wireless networks causes a dataset shift that degrades model performance. Unlike deep learning, model-based iterative algorithms can consistently achieve solutions with theoretical performance guarantees. Integrating the domain knowledge of model-based algorithms with neural networks, known as knowledge-driven methods, can simplify the architecture of machine learning systems, decrease training overhead, enhance the interpretability of decisions, and increase their practical utility <cit.>. Deep unrolling<cit.> provides an effective solution for integrating domain knowledge with iterative algorithms. The main idea is to design neural networks by leveraging the structure of classical iterative algorithms, incorporating the iterative structure of the algorithm into each layer of the network. This approach treats each network layer as one iteration of the original iterative optimization algorithm and learns the network parameters from the data.
Deep unrolling combines the benefits of data-driven learning with the domain knowledge embedded in the iterative algorithm, resulting in improved performance and generalization capabilities. In the field of image processing, deep unrolling has successfully addressed several challenging problems, including image restoration<cit.>, deep image deblurring<cit.>, and image super-resolution<cit.>. In the field of wireless communications, unrolling the projected gradient descent (PGD) algorithm into a neural network has shown better accuracy with lower and more flexible computational complexity in MIMO detection problems <cit.>. A low-complexity deep neural network-based MIMO detector was proposed by deep unrolling of the alternating direction method of multipliers (ADMM)<cit.>. In <cit.>, the original iterative shrinkage thresholding algorithm is transformed into an unrolled RNN, maintaining the robustness of the algorithm and improving estimation accuracy. The iterative WMMSE algorithm is unrolled into a layer-by-layer CNN structure, introducing trainable parameters to replace the high-complexity operations in forward propagation, reducing computational complexity for efficient performance and enhancing neural network generalization <cit.>. In addition to unrolling model-based iterative algorithms into architectures such as DNNs and RNNs, unrolled GNNs have the advantages of scalability and interpretability. A deep unrolling architecture based on GNN is proposed in <cit.>, which only learns key parameters of the WMMSE algorithm with a GNN without unrolling the WMMSE iterations as GNN layers. As demonstrated in <cit.>, the alignment of the GNN architecture with the algorithm may potentially enhance the representation power of the GNN and thus reduce the sample complexity. Therefore, the investigation of GNN unrolling with accurate alignment between GNN layers and algorithm iterations is required to improve the performance further. In this paper, we propose a novel knowledge-driven, GNN architecture-based resource allocation approach for D2D networks, guided by the unrolling of the WMMSE algorithm. Specifically, to align one iteration of the WMMSE algorithm with one GNN layer, the message passing and aggregation functions are designed based on the unrolled WMMSE algorithm, which utilize two cascaded GNN modules for hierarchical feature extraction and node update, respectively. By adopting the structure and domain knowledge of the WMMSE algorithm, our proposed approach retains the algorithm's robustness while effectively reducing its computational delay, decreasing the sample complexity of the neural network, and adapting to various data distributions. The main contributions of the paper can be summarized as follows. * We propose a novel GNN architecture guided by a deep unrolling of the WMMSE algorithm, named UWGNN. This approach leverages the summation part of the WMMSE algorithm to design the aggregation function of the GNN, while adopting the power calculation formula of a single node to design the update function of the GNN. * We conduct a theoretical analysis to demonstrate the validity of deep unrolling in UWGNN, where the GNN layers are aligned with the WMMSE iterations. The deep unrolling model is scrutinized from two aspects: the network mapping and the size of the network exploration space. Through this analysis, it is concluded that deep unrolling makes the mapping relationship of the model more accurate and reduces the exploration space of the model.
* Our proposed UWGNN shows strong performance, robustness, and scalability through extensive simulations. Moreover, the architecture exhibits excellent generalization performance when dealing with diverse data distributions and communication topologies without necessitating retraining. Experiments show that the approach enables on-demand reduction of delay in communication network resource allocation. The remainder of this paper is organized as follows. Section II introduces the communication model and the formulation of the resource allocation problem. In Section III, we present the WMMSE algorithm along with our proposed unrolling network architecture. In Section IV, we propose a theoretical hypothesis for the validity of the unrolling technique. In Section V, we demonstrate the effectiveness of our proposed approach through numerical experiments. Finally, Section VI summarizes and concludes the paper. § SYSTEM MODEL AND PROBLEM FORMULATION §.§ System Model We consider a D2D scenario consisting of N single-antenna transceiver pairs. Let p_i denote the transmission power that transmitter i uses to send a baseband signal s_i to receiver i. That is, the transmission signal x_i=√(p_i) s_i. Then, the received signal at receiver i is y_i=h_iix_i+∑_j=1,j≠ i^Nh_ijx_j+n_i, ∀ i, where h_ii∈ℝ represents the direct channel between transmitter i and receiver i, h_ij∈ℝ with i ≠ j represents the interference channel from transmitter j to receiver i, and n_i denotes the additive noise following the complex Gaussian distribution 𝒞𝒩(0, σ^2). Based on the receiver equalization coefficient u_i, the signal recovered by receiver i can be obtained as ŝ_i=u_i y_i. Assuming that the signals of different users are independent of each other and of the receiver noise, the signal-to-interference-plus-noise ratio (SINR) of receiver i is expressed as SINR_i=|h_ii|^2 p_i/(∑_j≠ i^N|h_ij|^2 p_j+σ ^2), ∀ i, where p_i represents the power of transmitter i, and 0≤ p_i≤ p_max with p_max denoting the maximum transmission power of the transmitter. Our objective is to maximize the weighted sum rate by optimizing the transmission power, formulated as max_𝐩 ∑_i=1^Nλ_ilog_2 (1+|h_ii|^2 p_i/(∑_j≠ i^N|h_ij|^2 p_j+σ ^2)), s. t. 0≤ p_i≤ p_max, ∀ i, where the weight λ_i represents the priority of transmitter i in the sum rate problem, and the power vector is expressed as 𝐩=[p_1,…,p_N]. §.§ WMMSE Algorithm The non-convex nature of Problem (<ref>) arises from the objective function. Many iterative algorithms have been proposed to solve it effectively, of which the WMMSE algorithm<cit.> is the most classical one. The main idea of the algorithm is to equivalently transform the weighted sum-rate maximization problem into a problem of minimizing a weighted sum of mean squared errors (MSE), as follows: min_𝐮,𝐯,𝐰 ∑_i=1^Nλ_i(w_i e_i-log w_i) s. t. 0≤v_i^2≤ p_max, ∀ i, where w_i≥ 0 is an introduced auxiliary variable indicating the weight for transmitter i, v_i=√(p_i), and 𝐮=[u_1,…,u_N], 𝐯=[v_1,…,v_N], 𝐰 =[w_1,…,w_N]. e_i≜𝔼_s, n[(ŝ_i-s_i)^2] is the MSE between the transmitted signal s_i and the recovered signal ŝ_i, expressed as e_i=(1-u_ih_iiv_i)^2 +∑_j ≠ i(u_ih_ijv_j)^2 +σ^2u_i^2, ∀ i. It has been demonstrated in <cit.> that the WMMSE problem presented in (<ref>) is equivalent to the problem of maximizing the sum-rate as depicted in (<ref>), with both problems sharing an identical optimal solution denoted as v_i.
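To make the weighted sum-rate objective above concrete, the following minimal NumPy sketch evaluates the SINR and the objective for a given power vector and channel matrix; the function and variable names are our own illustrative choices, not the paper's code.

```python
import numpy as np

def weighted_sum_rate(H, p, lam, sigma2):
    """Weighted sum rate for an N-pair D2D interference network.

    H[i, j] : channel gain from transmitter j to receiver i (h_ij)
    p[i]    : transmit power of transmitter i
    lam[i]  : priority weight lambda_i
    sigma2  : noise power sigma^2
    """
    signal = np.abs(np.diag(H)) ** 2 * p            # |h_ii|^2 p_i
    interference = np.abs(H) ** 2 @ p - signal      # sum_{j != i} |h_ij|^2 p_j
    sinr = signal / (interference + sigma2)
    return float(np.sum(lam * np.log2(1.0 + sinr)))

# Illustrative usage with random real-valued channels
rng = np.random.default_rng(0)
N, p_max, sigma2 = 10, 1.0, 0.1
H = rng.standard_normal((N, N))
p = np.full(N, p_max)
lam = np.ones(N)
print(weighted_sum_rate(H, p, lam, sigma2))
```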
Subsequently, the weighted sum-MSE minimization problem is decomposed into three separate optimization subproblems, each of which can be solved iteratively. Since the subproblems associated with the optimization variable vectors {𝐮,𝐯,𝐰} are convex in nature, the algorithm utilizes a block coordinate descent approach to solve the WMMSE problem in (<ref>). More specifically, the algorithm sequentially fixes two of the three variables {u_i,w_i,v_i} and updates the remaining one, yielding the following update formulas: u_i^(k) = h_iiv_i^(k-1)/(σ ^2+∑_j=1^N h_ij^2 (v_j^(k-1))^2), ∀ i w_i^(k) = 1/(1-u_i^(k)h_iiv_i^(k-1)), ∀ i v_i^(k) = λ_iu_i^(k)h_iiw_i^(k)/(∑_j=1^Nλ_j h_ji^2 (u_j^(k))^2 w_j^(k)), ∀ i where k=1,...,K denotes the iteration index and K the total number of iterations. The detailed WMMSE algorithm is outlined in Algorithm 1. Although the WMMSE algorithm has demonstrated high performance in various wireless communication systems, some of its shortcomings limit its practical application. Firstly, the algorithm is prone to getting trapped in local optima. Additionally, the computational time required for the WMMSE algorithm to converge is significant, particularly in large-scale networks. § NEURAL NETWORK ARCHITECTURE DESIGN USING WMMSE ALGORITHM To enhance online computational efficiency while preserving the interpretability of the WMMSE algorithm, we propose a knowledge-driven GNN approach for transmission power allocation in D2D networks. Our proposed method incorporates the unrolled WMMSE algorithm as the message aggregation and combination functions within the GNN. In what follows, we first briefly introduce GNNs and the unrolling technique, and then present the knowledge-driven GNN. §.§ Preliminaries §.§.§ Graph Neural Networks GNNs were initially designed to process non-Euclidean structured graph data <cit.>. Unlike traditional neural networks that operate on a fixed grid of inputs, GNNs can handle data with arbitrary connectivity, making them well-suited for tasks such as node classification, graph classification, and clustering detection. Particularly, GNNs operate by iteratively passing messages between nodes in the graph, updating the node representations based on the received information from neighboring nodes. This process can be thought of as a form of message passing, where each node can aggregate information from its local neighborhood and integrate it into its representation. The aggregation function is primarily utilized to consolidate the features of neighboring nodes and connected edges, whereas the combine function is responsible for updating the current node features based on the node features from the previous iteration and the aggregated neighborhood features. Formally, the aggregate and combine rules of the k-th layer at node i in GNNs are respectively expressed as α_i^(k) =AGGREGATE^(k)({β_j^(k-1): j ∈𝒩(i)}), β_i^(k) =COMBINE^(k)(β_i^(k-1), α_i^(k)) where β_i^(k) represents the feature vector of node i at the k-th layer, i.e., after the k-th iteration, 𝒩(i) is the set of neighbor nodes of i, and α_i^(k) is an intermediate variable. §.§.§ Algorithm Unrolling Algorithm unrolling, also referred to as deep unrolling or unfolding, represents a technique that bridges the gap between deep learning and traditional iterative models, enabling the amalgamation of domain knowledge and data-driven learning. The fundamental concept of deep unrolling is transforming an iterative inference algorithm into a hierarchical structure that mimics a neural network.
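For reference, the block coordinate descent updates above — the iterations that the unrolled architecture will mirror layer by layer — can be sketched as follows. This is a simplified real-valued implementation with an added projection onto the power constraint; it is an illustration rather than the authors' code.

```python
import numpy as np

def wmmse(H, lam, p_max, sigma2, K=100):
    """Simplified WMMSE power control for N transceiver pairs (real channels)."""
    N = H.shape[0]
    v = np.full(N, np.sqrt(p_max))           # v_i = sqrt(p_i), initialized at full power
    for _ in range(K):
        # u-update: u_i = h_ii v_i / (sigma^2 + sum_j h_ij^2 v_j^2)
        u = np.diag(H) * v / (sigma2 + (H ** 2) @ (v ** 2))
        # w-update: w_i = 1 / (1 - u_i h_ii v_i)
        w = 1.0 / (1.0 - u * np.diag(H) * v)
        # v-update: v_i = lam_i u_i h_ii w_i / (sum_j lam_j h_ji^2 u_j^2 w_j)
        v = lam * u * np.diag(H) * w / ((H ** 2).T @ (lam * (u ** 2) * w))
        v = np.clip(v, 0.0, np.sqrt(p_max))  # project onto the power constraint
    return v ** 2                            # allocated powers p_i

rng = np.random.default_rng(1)
N = 10
H = rng.standard_normal((N, N))
p = wmmse(H, lam=np.ones(N), p_max=1.0, sigma2=0.1)
```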
Each layer of the neural network corresponds to one iteration of the algorithm. Gregor and LeCun proposed the seminal work on deep unrolling <cit.>, which has been used to connect various iterative algorithms, such as those used in sparse coding, to diverse neural network architectures. It is possible to unfold an N-step iterative inference algorithm into an N-layer neural network with trainable parameters. This aims to enhance the model performance by leveraging a computationally-lighter neural network. Unrolled networks boast high parameter efficiency and require less training data than popular neural networks. Moreover, this approach efficiently counters the lack of interpretability normally found in traditional neural networks. This approach provides a systematic link between traditional iterative algorithms and deep neural networks, leading to efficient, interpretable, and high-performance network architectures. §.§ Graph Representation of the Weighted Sum Rate Maximization Problem Before presenting the knowledge-driven neural network architecture, we first model the D2D network as a directed graph with node and edge features. As shown in Fig. <ref>, we consider a pair of D2D communication users as a node in the graph and the interference link between transmitter j and receiver i as an edge between node j and node i. Typically, node features include attributes such as node labels, node degrees, and node positions, while edge features may include attributes such as edge weights, edge types, and edge directions. For problem (<ref>), the node features contain the weight factor λ_i, the channel gain h_ii of the D2D pair, the transmission power v_i of transmitter i, and the resource allocation intermediate variables u_i, w_i, etc., whereas the edge feature includes the channel gain of the interfering channel h_ij. The modeled wireless channel directed graph is mathematically represented as G=(V,E), where V is the set of nodes and E is the set of edges. We denote the feature vector of node i by 𝐳_i, represented as 𝐳_i=[ λ_i,h_ii,v_i,𝐮_𝐢,𝐰_𝐢]^⊤, 𝐳_i ∈ℂ ^(3+d_u+d_w)× 1 where λ_i, h_ii, v_i are one-dimensional variables, and u_i and w_i are a d_u-dimensional vector and a d_w-dimensional vector, respectively. To enhance the feature extraction capability of the GNN, we propose expanding the one-dimensional variables u_i and w_i into these d_u-dimensional and d_w-dimensional vectors. The node feature matrix 𝐙 can be expressed as 𝐙= [𝐳_1,…,𝐳_N]. The edge adjacency feature matrix 𝐀∈ℂ ^N×N is given by 𝐀_(i,j ) = { 0 , if{i,j }∉ E h_ij, otherwise. By defining the node feature matrix 𝐙 and the edge feature matrix 𝐀, the considered D2D scenario is converted into a directed graph. Based on this, we will develop an effective algorithmic-knowledge-inspired graph neural network architecture.
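As a concrete illustration of this graph representation, the node feature matrix 𝐙 and the edge feature matrix 𝐀 can be assembled as in the sketch below; the dimensions d_u = d_w = 4, the zero initialization of u_i and w_i, and the variable names are our own assumptions.

```python
import numpy as np

def build_graph(H, lam, p_max, d_u=4, d_w=4):
    """Encode an N-pair D2D network as node features Z and edge features A.

    Node i carries [lambda_i, h_ii, v_i, u_i (d_u dims), w_i (d_w dims)];
    edge (i, j) carries the interference gain h_ij (zero if no edge).
    """
    N = H.shape[0]
    v0 = np.full(N, np.sqrt(p_max))           # initial power feature v_i
    u0 = np.zeros((N, d_u))                   # expanded intermediate variables
    w0 = np.zeros((N, d_w))
    Z = np.concatenate(
        [lam[:, None], np.diag(H)[:, None], v0[:, None], u0, w0], axis=1
    )                                         # shape (N, 3 + d_u + d_w)
    A = H.copy()
    np.fill_diagonal(A, 0.0)                  # direct links are node, not edge, features
    return Z, A

rng = np.random.default_rng(2)
N = 10
H = rng.standard_normal((N, N))
Z, A = build_graph(H, lam=np.ones(N), p_max=1.0)
```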
Inspired by the deep unrolling technique, we propose a novel GNN architecture based on the unrolled WMMSE algorithm, named UWGNN. Specifically, in the message passing process of the GNN, we design the message passing and aggregation functions by utilizing the sum operation over neighborhood information in the denominators of (<ref>) and (<ref>) in the WMMSE algorithm. Instead of aggregating all node and edge features into the aggregation function at once, we selectively input the features of the aggregation function based on the sum operation. Typically, a single-layer GNN involves only one round of message passing and aggregation. In contrast, as shown in (<ref>) and (<ref>), the WMMSE algorithm requires two rounds of aggregation for neighbor messages within one cycle, and the information dimensions of the two processes are different. Therefore, we design two different aggregation functions corresponding to two GNN modules in cascade. For the node feature update, we design three update steps based on the WMMSE algorithm, corresponding to the updates of u_i, w_i, and v_i. Similar to the idea of the algorithm to fix one variable and update the remaining variables, our GNN architecture first updates u_i, then updates w_i, and finally passes the new u_i and w_i to update v_i. This hierarchical feature extraction and node update strategy, inspired by the algorithm, is more conducive to the GNN's deep understanding of data features. The network architecture is depicted in Figure <ref>. The first GNN module is used to unroll the equations of the algorithm that calculate u_i and w_i. In (<ref>), the features of the neighbor power v_j and path loss h_ij are extracted by MLP_1. To cope with the lack of channel information, we adopt the MAX pooling operation to aggregate the neighborhood information α_u_i. Feeding α_u_i and the node features v_i,h_ii into the combination function MLP_2 in (<ref>), we can obtain the u_i information of the nodes. Similarly, w_i is calculated by MLP_3 in (<ref>). α_u_i^(k) = MAX{MLP_1(h_ij, v_j^(k-1) )}, j∈𝒩(i), u_i^(k) = MLP_2 ( h_ii, v_i^(k-1), α_u_i^(k) ) w_i^(k) = MLP_3 ( h_ii, v_i^(k-1), u_i^(k) ), We utilize the second GNN to unroll equation (<ref>), which serves as a basis for computing the node power v_i. In order to compute the denominator part of (<ref>), we employ MLP_4 as the aggregation function for the second GNN, which enables us to gather the neighboring node features u_j, w_j and the edge feature h_ji. The neighborhood information α_v_i is aggregated by the MAX pooling operation. Finally, we use MLP_5 in (<ref>) as the combining function to update the node power information v_i. α_v_i^(k) = MAX{MLP_4(h_ji, u_j^(k), w_j^(k) )}, j∈𝒩(i), v_i^(k) =γ(MLP_5( λ_i,h_ii, u_i^(k),w_i^(k),α_v_i^(k))) , where γ (x) is a sigmoid function to constrain the output power, i.e., γ(x)=1/(1+e^-x). §.§ The Training Approach The selection of the loss function for neural networks has a significant impact on the overall performance of the network. In supervised learning, the loss function is computed using the labels v̂_i derived from the WMMSE algorithm. However, in realistic scenarios, the communication channel and network topology change quickly, and the ground truth labels required for training are difficult to collect within a limited period of time. Moreover, it is demonstrated in <cit.> that the algorithm's output constrains the upper limit of convergence performance. The unsupervised loss function in (<ref>) instead uses the problem formulation to optimize the neural network.
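Before turning to the training procedure, a schematic PyTorch-style sketch of one UWGNN layer — the two cascaded aggregation and update stages described above — is given below. The layer widths, the masking of self-loops in the MAX pooling, and the overall coding style are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

def mlp(sizes):
    layers = []
    for a, b in zip(sizes[:-1], sizes[1:]):
        layers += [nn.Linear(a, b), nn.ReLU()]
    return nn.Sequential(*layers[:-1])  # no activation after the final linear layer

class UWGNNLayer(nn.Module):
    """One unrolled WMMSE iteration: aggregate -> update u, w -> aggregate -> update v."""
    def __init__(self, d_u=4, d_w=4, d_msg=16):
        super().__init__()
        self.mlp1 = mlp([2, 8, d_msg])                  # message from (h_ij, v_j)
        self.mlp2 = mlp([2 + d_msg, 8, d_u])            # u_i from (h_ii, v_i, alpha_u_i)
        self.mlp3 = mlp([2 + d_u, 8, d_w])              # w_i from (h_ii, v_i, u_i)
        self.mlp4 = mlp([1 + d_u + d_w, 8, d_msg])      # message from (h_ji, u_j, w_j)
        self.mlp5 = mlp([2 + d_u + d_w + d_msg, 8, 1])  # v_i update

    def forward(self, H, lam, v):
        """H: (N, N) gains, lam: (N, 1) weights, v: (N, 1) current power feature."""
        N = H.size(0)
        h_ii = torch.diag(H).unsqueeze(1)
        self_mask = torch.eye(N, dtype=torch.bool, device=H.device).unsqueeze(-1)
        # First aggregation over neighbors j: messages built from [h_ij, v_j]
        e1 = torch.stack([H, v.T.expand(N, N)], dim=-1)
        m1 = self.mlp1(e1).masked_fill(self_mask, float('-inf'))
        alpha_u = m1.max(dim=1).values
        u = self.mlp2(torch.cat([h_ii, v, alpha_u], dim=1))
        w = self.mlp3(torch.cat([h_ii, v, u], dim=1))
        # Second aggregation over neighbors j: messages built from [h_ji, u_j, w_j]
        feats = torch.cat([u, w], dim=1)
        e2 = torch.cat([H.T.unsqueeze(-1), feats.unsqueeze(0).expand(N, N, -1)], dim=-1)
        m2 = self.mlp4(e2).masked_fill(self_mask, float('-inf'))
        alpha_v = m2.max(dim=1).values
        # Sigmoid keeps the normalized power in [0, 1]
        return torch.sigmoid(self.mlp5(torch.cat([lam, h_ii, u, w, alpha_v], dim=1)))

layer = UWGNNLayer()
N = 10
v1 = layer(torch.randn(N, N), torch.ones(N, 1), torch.ones(N, 1))
```

Stacking three such layers with shared parameters and scaling the sigmoid output by √(p_max) would mimic the three unrolled WMMSE iterations, trained with the unsupervised loss introduced next.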
Recent research <cit.> <cit.> has shown that unsupervised training approaches outperform the WMMSE algorithm. Therefore, for unsupervised training of our model, we adopt the optimization objective as the loss function in (<ref>). ℒ_U(θ )=-𝔼(∑_i=1^Nλ_ilog_2(1+|h_iiv_i(θ )|^2 /(∑_j≠ i^N|h_ijv_j(θ )|^2 +σ^2 ))), where θ denotes the learnable parameters of the neural network. § THEORETICAL ASSUMPTIONS ABOUT VALIDITY OF GRAPH NEURAL NETWORK UNROLLING APPROACH In this section, we discuss the validity of deep unrolling techniques for GNNs with respect to the network mapping and the size of the model exploration space. Traditional neural networks employ the back-propagation algorithm to determine the steepest gradient descent direction for updating network parameters. The acquired network mapping is optimal for solving optimization problems under a specific data distribution, which can be conceptualized as end-to-end matrix multiplication. However, the learned mapping is highly dependent on the data distribution. §.§ Constrained Neural Network Mapping As stated in <cit.>, a good algorithm alignment means that all algorithm steps of an iterative algorithm are easy to learn. To facilitate learning complex end-to-end mappings, the unrolling approach decomposes the one-iteration process of the algorithm into smaller, simpler subtasks in a hierarchical fashion. Each module learns a portion of the mapping relationship in the optimization algorithm, thereby reducing the overall complexity of model training. For instance, the Newton iterative approach in (<ref>) necessitates that an end-to-end neural network learn the entire mapping G ( x ), which contains multiple steps. When f(x) is complex, learning the end-to-end mapping is challenging. By dividing the mapping into distinct g_i ( x ) modules in (<ref>), the unrolling approach can learn these mappings, including crucial parameters such as the iteration step size. This technique effectively learns the algorithmic mapping and, through multi-level mapping, enhances the network's expressive capability compared to one-step mapping. x_n+1 = x_n-f(x_n)/f^'(x_n) ⇔ G ( x ):x_n→ x_n+1 ⇔ g_1 ( x ):x_n→ f(x_n) g_2 ( x ):x_n→ f^'(x_n) g_3 ( x ): ( f^'(x_n),f(x_n),x_n ) → x_n+1 §.§ Reduce the Model Exploration Space Furthermore, the input-output relationship between algorithm-designed modules constrains the mapping direction, and algorithm-based feature extraction aids in reducing the network exploration space. Building on the conclusions in <cit.> that deeper GNNs increase feature correlations, we examine the impact of input inter-feature correlations on network exploration. Employing the widely used Pearson correlation coefficient measure, <cit.> introduces the metric Corr(X) to quantify the correlation between all feature dimension pairs. Corr(𝐗)=1/(d(d-1))∑_i≠ j|φ (𝐗_ (:, i ), 𝐗_ (:, j ) )|, i, j ∈ [1,2,…,d], where 𝐗∈ℂ^N× d represents the node feature matrix with d being the number of features, 𝐗_ (:, i ) indicates the i-th column feature vector of 𝐗, and φ (x, y) is the Pearson correlation coefficient for assessing the correlation between learned dimensions in deep GNNs. We propose that the exploration space of a neural network can be viewed as a high-dimensional space, expanded orthogonally by independent input variables.
Considering two one-dimensional features on a graph node, it is observed that when the two input features are independent, the exploration space can be characterized as a high-dimensional space ℝ^2, expanded orthogonally by two one-dimensional variables. In contrast, if the two features are identical, the exploration space is reduced from a high-dimensional, orthogonally expanded ℝ^2 space to a one-dimensional linear expanded ℝ^1 space. This reduction in the exploration space size is manifested by an increase in the correlation coefficient Corr(X) between input features, with Corr(X) varying from 0 to 1. Therefore, the relationship between exploration space and input feature correlation can be expressed as ℝ^d^1-Corr(X). Based on an iterative algorithm, the unrolling approach decomposes end-to-end optimization in a single iteration into multiple sub-problems that are iteratively processed. Through repeated feature extraction, this approach increases the correlation between node features in GNNs. For example, in the WMMSE algorithm, the k-th power v_i^(k) is generated by channel features h plus the power v_i^(k-1) from the previous iteration, which creates a certain correlation between the input features of the GNN. Moreover, in the deep unrolling architecture, two message passing operations and multiple channel information pass to increase the correlation between power v_i and channel characteristic h. This approach is more effective in feature extraction with higher correlation of features after multi-layer networks and more accurate exploration of optimization objectives, especially when faced with changing data distributions. § NUMERICAL EXPERIMENTS This section is dedicated to the conduction of comprehensive numerical tests to affirm the effectiveness and generalization of the presented knowledge-driven network architecture. Our experimental setup consists of a user count set to N=10, a noise variance of σ^2=10dB, and an interference channel following a Rayleigh distribution. We derive the channel coefficients h_ij from the complex normal distribution 𝒞𝒩(0, 1). In terms of the neural network training parameters, we utilize the Adam optimizer with a set learning rate of 0.001 and designate the batch size to be 64 samples. We compare UWGNN with established benchmarks and cutting-edge approaches. * WMMSE <cit.>: This is a classical iterative optimization algorithm for weighted sum rate maximization in interference channels. We run WMMSE for 100 iterations with p_max as the initial power setting. The results obtained from this process served as our benchmark measurements. * WCGCN <cit.>: This is an unsupervised message passing GNN that uses two MLP networks to aggregate neighbor information and update its power information, and obtain a performance much better than the WMMSE algorithm. * UWMMSE <cit.>: UWMMSE proposed a deep unrolling architecture based on GNN to learn the iterative step-size a^(k) and the weight translation parameter b^(k), reduced the times of WMMSE iterations, and attained the performance comparable with well-established benchmarks. * MLP <cit.>: MLP uses the WMMSE output as a training label to supervise and learn a function mapping between the channel state information and the corresponding resource allocation. We set UWGNN, WCGCN, and UWMMSE as three-layer networks, corresponding to three iterations, and use the same random seed to generate 10^4 sets of channel training samples to train three networks, respectively. 
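For concreteness, the Rayleigh training-data generation described above might be sketched as follows; the sample counts follow the setup above, while the normalization and seeding are assumptions.

```python
import numpy as np

def generate_channels(num_samples, N=10, seed=0):
    """Rayleigh-fading interference channels: h_ij ~ CN(0, 1), returned as magnitudes."""
    rng = np.random.default_rng(seed)
    H_complex = (rng.standard_normal((num_samples, N, N))
                 + 1j * rng.standard_normal((num_samples, N, N))) / np.sqrt(2.0)
    return np.abs(H_complex)            # |h_ij| is Rayleigh with unit second moment

train_H = generate_channels(10_000)     # 10^4 training samples, as in the setup above
test_H = generate_channels(2_000, seed=1)
```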
Learning the iterative process with an MLP can be challenging and may require more training samples. Therefore, to effectively train the model, we utilized a ten-fold increase in training data in this study. §.§ Selection of UWGNN Hyperparameters In this section, our investigation centers on how model performance is influenced by factors such as the intermediate message-passing dimension, which is the output dimension of MLP_1,4, and the variable dimension, which corresponds to the output dimension of MLP_2,3. We used 20 random seeds in our experiments and chose the WMMSE algorithm output as a benchmark. Fig. <ref> illustrates the effect of the message passing size on model performance. We observed that when the dimension of the aggregation message was too small, UWGNN performance declined. When the dimension increased to 16, UWGNN performance exceeded the WMMSE baseline. Further increases did not impact the model performance significantly. We hypothesized that the message passing dimension influences the ability of the node to extract information from neighboring nodes, as evidenced by (<ref>), and concluded that the dimension of the message variables must be at least equal to the sum of the dimensions of u_j, w_j, and h_ij. Reducing the message passing dimension leads to compression of the neighboring node feature information, which decreases network performance. On the other hand, an excessively wide message passing dimension only marginally enhanced model performance while generating redundant computational complexity. For the intermediate variable dimensions, as presented in Fig. <ref>, the impact on the performance of the network model is minimal, with a slight decrease in performance observed only for intermediate variable dimensions equal to two. The intermediate variables designed in our study are based on the one-dimensional messages u_i and w_i in (<ref>)-(<ref>). Increasing the dimension of these variables has little impact on one-dimensional feature extraction. Thus, the intermediate variable dimension in Fig. <ref> does not significantly affect network performance. However, in distributed GNNs, communication resources are utilized during the message passing process. An excessively wide intermediate variable dimension requires a larger message passing dimension, which can increase transmission latency and path loss, thereby affecting the accuracy and delay of GNN inference. As a result, we aim to strike a balance between network feature extraction and optimal network performance by ensuring that the feature dimension of u_i, w_i is comparable to the input dimension of h_i, v_i. As stated above, we set the network unit sizes of MLP_1-MLP_5 in (<ref>)-(<ref>) to {5, 8, 16}, {19, 8, 4}, {7, 8, 4}, {10, 8, 16}, and {27, 8, 1}. UWGNN is constructed with three unrolled WMMSE layers, each of which shares parameters. In every layer, two GNNs are incorporated as learning components, utilized for the emulation of (<ref>)-(<ref>). §.§ Sum Rate Performance We compare the sum rate performance obtained by UWGNN with other approaches, as shown in Table <ref>. To determine the upper bound, we ran the WMMSE algorithm 100 times with random power initializations and selected the best performance. As shown by the results, both UWGNN and WCGCN are in close proximity to the benchmark performance when dealing with relatively small problem scales. As the number of users increases, MLP is not an optimal choice because it requires a large number of training samples.
This observation indicates that MLP is not well-suited for learning an iterative process. In contrast, UWGNN and WCGCN demonstrate attractive properties as their performance remains stable and close to the performance ceiling of WMMSE, even as the number of users increases. It is worth noting that UWMMSE requires the WMMSE algorithm to obtain the output power, which significantly impacts its performance as the computational complexity increases. Our findings suggest that message passing GNNs are preferable and more effective than other approaches for iterative optimization problems. §.§ Convergence Speed Comparison We compare the network convergence rates in Fig. <ref>; our network converges in only 1/3 of the epochs required by WCGCN to reach the same converged performance. UWMMSE learns the step size parameter and uses the WMMSE algorithm to iteratively obtain the output. Therefore, the initial performance of UWMMSE is better, but it presents a slower linear convergence speed. Although the deep unrolling operation increases the computational complexity of the network from O(L(|E|+|N|)) to O(L(|E|+|N|^2 )(|E|+|N|)), our network architecture is narrower in width and deeper in depth; thus UWGNN has a smaller parameter count than WCGCN. We employ the thop library in Python for comparing the computational load and the network's parameter count, as specified in Table <ref>. The proposed network converges faster because it processes the channel state information and node power information over more iterations, resulting in a smaller exploration space, and adopts the same feature extraction approach as the WMMSE algorithm. §.§ Scalability Comparison In order to assess network scalability, both UWGNN and WCGCN were independently trained over 30 epochs until convergence was achieved in a scenario with 20 users. Subsequently, these trained networks were transferred to new scenarios featuring varying numbers of users, without the need for additional training. As shown in Fig. <ref>, both GNN networks achieve good performance in the smaller 10-user scenario. However, when the number of users increases to 50, the WCGCN network can no longer exceed the performance of the WMMSE baseline, while our network can still exceed the baseline performance. As the number of users reaches 100 and the user connection density intensifies, WCGCN can no longer achieve the baseline performance, while our network architecture still achieves the baseline performance. After our experiments, we found that the scalability of the GNN comes from the smoothing operation of the max pooling layer on the features of neighboring nodes. When the number of users changes, the pooling layer uses SUM(.), MAX(.), or MEAN(.) functions to extract the feature information of neighbor nodes and edges. Experimentally, it is found that the MAX function works best for the problem in this paper. Moreover, our network architecture undergoes two rounds of MAX pooling because of its two information aggregation operations; thus, it is better suited for scaling to diverse scenarios with varying user densities. §.§ Channel Distribution Generalization The generalization performance of the network is tested using datasets with varying channel distributions. As the user moves, the scattered channel may convert into a direct channel, leading to a potential shift in data distribution from Rayleigh to Rician distributions in the communication scenario.
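The distribution-shift test sets discussed next can be sketched as below, where the Rician case is obtained by adding a nonzero channel mean to the scattered component; the exact parameterization and values are our own reading of the setup.

```python
import numpy as np

def shifted_channels(num_samples, N=10, mean=0.0, std=1.0, seed=0):
    """Complex Gaussian channels with a (possibly nonzero) mean.

    mean=0 gives Rayleigh-distributed magnitudes; mean=1 adds a LOS component,
    giving Rician-distributed magnitudes; std controls the scattered-part spread.
    """
    rng = np.random.default_rng(seed)
    scatter = (rng.standard_normal((num_samples, N, N))
               + 1j * rng.standard_normal((num_samples, N, N))) * std / np.sqrt(2.0)
    return np.abs(mean + scatter)

rayleigh_test = shifted_channels(2_000, mean=0.0, std=1.5)   # variance shift only
rician_test = shifted_channels(2_000, mean=1.0, std=1.0)     # add a LOS component
```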
To ascertain the model's generalization ability, we adjusted the sample distribution of the channel in the test set. The initial training set consisted of Rayleigh channels with a mean of 0 and a variance of 1. As shown in Fig. <ref>, we modified the variance of the Rayleigh channel within the test set. During minor variance changes, UWGNN and WCGCN both exhibited generalization capabilities. Nonetheless, as the variance gap widened, WCGCN's performance deteriorated noticeably, signifying its limitations in adapting to data distribution shifts. With the integration of a knowledge-driven network grounded on the WMMSE algorithm, our network architecture demonstrated superior generalization compared to WCGCN, maintaining robust adaptability amidst diverse data distributions. Remarkably, our model sustained roughly 90% of its performance even amidst substantial variance alterations. In Fig. <ref>, we introduced the line of sight (LOS) component to the Rayleigh channel by increasing the channel mean to 1, thus transforming its distribution into a Rician distribution. Following this, we altered the variance of the Rician distribution. The experimental results underscored the continued robust generalization performance of our network. Fig. <ref> illustrates the impact of modifying the strength of the LOS component within the Rician channel. Our model maintained satisfactory performance under varied direct path strengths, facilitating a seamless transition between scattered and direct channels - a critical feature ensuring the model's generalization ability in a mobile environment. The conducted experiments affirm that the integration of a knowledge-driven model significantly bolsters generalization across disparate sample distributions. Facing diverse data distributions, our model demonstrates minimal performance degradation, obviating the need for additional retraining. §.§ Communication Topology Generalization As shown in Fig. <ref>, the topology of a communication network changes with the movement of users and the addition or removal of nodes, which affects the size of the node degree in the graph and thus impacts the performance of GNNs. To evaluate network generalization in adapting to communication topology changes, we generated a connection weight matrix as depicted in <ref>. The edges with weights lower than the probability of losing connection η_lc were removed to simulate sparse connections, allowing us to convert between fully and sparsely connected interference graph datasets. C_(j,i,:) = {0, c_ji<η_lc 1, otherwise. Â_(j,i,:) =A_(j,i,:)⊙ C_(j,i,:) where c_ji followed standard Gaussian distribution. Â_(j,i,:) is a new adjacency matrix of communication graph. We experimented with two training and testing directions: from dense to sparse, and from sparse to dense. First, we trained the network on a fully connected graph data set with 10 pairs of users, migrating the test to sparsely connected graph data. As shown in Fig. <ref>, our network degrades in performance as the communication topology becomes more sparse, but still maintains a generalization performance of over 85%. In contrast, the performance of WCGCN degrades more significantly as the topology of the communication network changes. Compared with the unweighted scenario, the performance of the neural network trained in the weighted summation rate scenario is more generalized in Fig. <ref>. This is because the weighted and rate scenarios are more complex and the network learns more data about distribution during training. 
Consequently, it is more adaptable when the network topology changes. Second, we train the neural network on training samples with η_lc of 0.6 and gradually decrease the η_lc value on the test data, turning the sparsely connected graph into a fully connected graph. The results are shown in Figs. <ref> and <ref>. With increasing communication topology density, our network performance still closely follows the WMMSE algorithm performance, but the traditional end-to-end GNN performance decreases with increasing interference density. §.§ Mobile Generalizability Performance The previous experiment only changed the link structure of the communication topology; however, as wireless devices usually move, the communication channel gain changes as well. In this experiment, we distribute N transmitters uniformly in a [1000m× 1000m] area, and the corresponding receivers are distributed around the transmitters with distances obeying the uniform distribution 𝐔 (30,90). Then we let each receiver device move randomly with speed S. The change of position of the receiver obeys a two-dimensional Gaussian distribution N(0, 0, S, S, 0). The distance matrix 𝐃(t) can be obtained by calculating the inter-device distances. Based on 𝐃(t), the channel gain matrix 𝐇(t) is adjusted proportionally with distance to reflect the changes caused by device movement, resulting in the updated channel gain matrix 𝐇(t+1). If the inter-device distance is greater than 1000 m, we consider that the channel gain is small and set h_ij to 0 to simulate the process of moving the user out of the device coverage. In this way, the test sample distribution will gradually move away from the training sample distribution as the movement time increases. In Fig. <ref>, we test how different user movement speeds affect network generalization. When the device moves at low speed, the communication topology and channel gain change slowly, and the distribution of training and test data does not differ much. So, both GNNs can maintain more than 95% performance. However, when the speed of device movement gradually increases and the test data changes more drastically, the end-to-end GNN has difficulty in adapting to the new distribution and the performance degrades. Due to the incorporation of algorithmic knowledge, our proposed GNN excels in its generalization capabilities, especially in dynamic environments involving user mobility. This advantage equips the GNN to seamlessly adapt to evolving communication scenarios and maintain the communication rates under varying conditions. §.§ Sample Complexity In <cit.>, the concept of network sample complexity and algorithm alignment is introduced, which shows that the higher the algorithm alignment, the lower the network sample complexity. To compare their network sample complexities, UWGNN and WCGCN are trained using different numbers of training samples until they attain convergence, and their performance is tested on 2000 test samples. Analysis of Fig. <ref> shows that our network performs well on small training datasets with different numbers of users, indicating that it does not depend on a large sample size to learn the statistical distribution of the data, but instead learns the structure and iterative calculation of the WMMSE algorithm. Additionally, the performance improves faster with an increase in sample size, indicating that our network aligns more effectively with the algorithm than the conventional GNN network.
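For completeness, the mobility model used in the last experiments — Gaussian receiver displacements followed by a distance-based rescaling of the channel gains — can be sketched as follows; the path-loss exponent and the proportional update rule are illustrative assumptions.

```python
import numpy as np

def move_and_update(tx, rx, H, speed, alpha=2.0, area=1000.0, rng=None):
    """One mobility step: receivers take 2-D Gaussian steps with std `speed`,
    and channel gains are rescaled with the new transmitter-receiver distances.

    tx, rx : (N, 2) transmitter / receiver positions in meters
    H      : (N, N) current gain matrix H(t), H[i, j] between receiver i and transmitter j
    alpha  : assumed path-loss exponent for the proportional rescaling
    """
    rng = np.random.default_rng() if rng is None else rng
    d_old = np.linalg.norm(rx[:, None, :] - tx[None, :, :], axis=-1)   # D(t)
    rx_new = rx + rng.normal(0.0, speed, size=rx.shape)
    d_new = np.linalg.norm(rx_new[:, None, :] - tx[None, :, :], axis=-1)
    H_new = H * (d_old / d_new) ** alpha       # proportional distance-based update
    H_new[d_new > area] = 0.0                  # device moved out of coverage
    return rx_new, H_new

rng = np.random.default_rng(0)
N = 10
tx = rng.uniform(0.0, 1000.0, size=(N, 2))
theta = rng.uniform(0.0, 2.0 * np.pi, N)
dist = rng.uniform(30.0, 90.0, N)
rx = tx + dist[:, None] * np.stack([np.cos(theta), np.sin(theta)], axis=1)
H = rng.standard_normal((N, N))                # illustrative initial gains
rx, H = move_and_update(tx, rx, H, speed=5.0, rng=rng)
```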
§ CONCLUSION In this paper, we have proposed a novel knowledge-driven approach based on a WMMSE-algorithm-inspired GNN to solve the resource allocation problem in D2D networks. Compared with current approaches, our approach has exhibited unique advantages in scalability and data generalization. Moreover, we have introduced a theoretical hypothesis for the validity of the graph neural network unrolling approach. Going forward, we plan to extend this work to other wireless resource allocation problems, such as bandwidth allocation, beam assignment, etc., and further develop our theoretical results to better guide the design of unrolling approaches. We expect that neural networks with knowledge-driven architectures will be significant in the future of wireless communication networks.
http://arxiv.org/abs/2307.04569v1
20230710140129
Interpreting and generalizing deep learning in physics-based problems with functional linear models
[ "Amirhossein Arzani", "Lingxiao Yuan", "Pania Newell", "Bei Wang" ]
cs.LG
[ "cs.LG", "physics.flu-dyn" ]
Interpreting and generalizing deep learning in physics-based problems with functional linear models =========================================================================================== ^1Department of Mechanical Engineering, University of Utah, Salt Lake City, UT, USA ^2Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT, USA ^3Department of Mechanical Engineering, Boston University, Boston, MA, USA. ^4School of Computing, University of Utah, Salt Lake City, UT, USA Correspondence: Amirhossein Arzani, University of Utah, Salt Lake City, UT, 84112 Email: [email protected] Although deep learning has achieved remarkable success in various scientific machine learning applications, its black-box nature poses concerns regarding interpretability and generalization capabilities beyond the training data. Interpretability is crucial and often desired in modeling physical systems. Moreover, acquiring extensive datasets that encompass the entire range of input features is challenging in many physics-based learning tasks, leading to increased errors when encountering out-of-distribution (OOD) data. In this work, motivated by the field of functional data analysis (FDA), we propose generalized functional linear models as an interpretable surrogate for a trained deep learning model. We demonstrate that our model could be trained either based on a trained neural network (post-hoc interpretation) or directly from training data (interpretable operator learning). A library of generalized functional linear models with different kernel functions is considered and sparse regression is used to discover an interpretable surrogate model that could be analytically presented. We present test cases in solid mechanics, fluid mechanics, and transport. Our results demonstrate that our model can achieve comparable accuracy to deep learning and can improve OOD generalization while providing more transparency and interpretability. Our study underscores the significance of interpretability in scientific machine learning and showcases the potential of functional linear models as a tool for interpreting and generalizing deep learning. Keywords: Explainable Artificial Intelligence (XAI); Scientific machine learning; Functional data analysis; Operator learning; Generalization § INTRODUCTION In recent years, deep learning has emerged as a transformative modeling approach in various science and engineering domains. Deep learning has been successfully used for improving the quality of physical data or improving physics-based models (e.g., superresolution <cit.>, denoising <cit.>, system/parameter identification <cit.>, and closure modeling <cit.>). Additionally, deep learning is a key tool in machine learning enhanced models where the goal of deep learning is to provide a surrogate for the physics-based model, which is useful in many-query and real-time predictive modeling <cit.>. While deep learning has demonstrated impressive success in most of these studies, its inherent black-box nature raises concerns regarding the interpretability of the prediction processes. In physics-based systems, where causal relationships and fundamental first-principle laws play a pivotal role in the results, interpretable models are essential for understanding the phenomena of interest and obtaining trustworthy results.
Additionally, it is often desirable for deep learning to generalize and extrapolate beyond the training data once the model is deployed and being used in practice, which is a challenging task in physics-based deep learning <cit.>. The challenges associated with interpretability and generalization in machine learning and deep learning could be overcome with parsimonious and interpretable models <cit.>. In physics-based modeling, this has been achieved with various techniques such as symbolic regression <cit.>, sparse identification of nonlinear dynamics (SINDy) <cit.>, interpretable reduced-order models (ROM) <cit.>, and design of certain coordinate transformations in deep neural networks <cit.>. More broadly, the growing field of interpretable and explainable artificial intelligence (XAI) offers a set of tools aimed at making black-box deep learning models understandable and transparent to humans <cit.>. XAI approaches could be classified as “by-design” and “post-hoc” methods. The aforementioned parsimonious models are by-design where one achieves interpretability by building such features in the machine learning model from the initial design phase, which has been a more common approach in physics-based modeling and scientific machine learning. However, by-design XAI approaches usually lead to a tradeoff between model accuracy and interpretability <cit.>. On the other hand, post-hoc XAI approaches do not compromise model accuracy and instead, explain the model's results in a post-processing step. Standard off-the-shelf XAI approaches have been recently used in various fields such as healthcare <cit.>, aerospace <cit.>, turbulence modeling <cit.>, and material science <cit.>. Interpretable machine learning models also offer the opportunity to improve generalization. However, generalization to out-of-distribution (OOD) input data is a key challenge in scientific machine learning and particularly for deep learning models <cit.>. While standard techniques such as regularization could be used to achieve acceptable in-distribution generalization error (interpolation), OOD generalization (extrapolation) is usually not achieved. Extrapolation poses a serious challenge for black-box deep learning models. As an example, machine-learning based turbulence models trained from equilibrium turbulence databases have failed once applied to non-equilibrium turbulence and transitional flows <cit.>. Interestingly, in certain examples, a simple linear regression model has exhibited remarkable performance in extrapolating training data, with an average error rate merely 5% higher than that of black-box models and even surpassed black-box models in approximately 40% of the scientific machine learning prediction tasks evaluated <cit.>. Here, we propose a post-hoc deep learning interpretation strategy where we build a surrogate for a given trained neural network in the form of generalized linear integral equations. We hypothesize that the interpretable model also improves OOD generalization while providing an approximation to the neural network's predictions. Given that many deep learning tasks in scientific computing deal with mapping between functions and functionals, we leverage theories within the field of functional data analysis (FDA) <cit.>. FDA provides a theoretical framework to effectively model and analyze functional data and has been used in different applications <cit.>. 
Specifically, we will use functional linear models that enable one to construct analytical mapping involving functions/functionals in the form of interpretable integral equations <cit.>. In scientific machine learning, the learning tasks often involve mapping between high-dimensional data <cit.>. In these high-dimensional settings, the simplest interpretable machine learning model, multivariate linear regression, can fail and more advanced interpretable models such as functional regression have been shown to provide better results <cit.>. Unlike multivariate methods that discard spatial/temporal distribution of the data, functional methods maintain and leverage the intrinsic structure of the data, capturing the temporal or spatial relationships between data points, and therefore can provide a more accurate mapping between the data and uncover valuable insights and patterns. A key challenge in functional regression is the learning of the kernel function that appears in the integral equations. A common approach is expanding the kernel in a certain basis or using a pre-defined fixed kernel <cit.>. Kernel regression is an established statistical modeling approach <cit.> and kernel methods have been used in building nonlinear ROMs <cit.>. In this work, we propose a more flexible framework where the kernel is learned from a library of candidate kernel functions using sparse regression. Once trained on data produced by probing a neural network in a post-hoc fashion, the model will provide an analytical representation in the form of a linear sum of integral equations that not only approximates the neural network's behavior but also provides potential improvement in OOD generalization. The model could be trained based on data probed on the entire training landscape or a subset of the input parameter space to provide a global or local interpretation, respectively. Our proposed approach could also be viewed in the context of operator learning and neural operators <cit.>. Deep learning of operators has recently gained attention in learning mapping between function spaces and has been utilized in various scientific machine learning problems <cit.>. Interestingly, certain neural operators also leverage integral equations and generalized versions of functional linear models <cit.>. In scientific computing, the utilization of Green's functions/operators <cit.> has inspired the incorporation of integral equations into the architecture of deep neural operators. These integral equations enable the learning of operators by mapping between function spaces and belong to the category of functional linear models. In this paper, we present an interpretable machine learning model that builds on several fields such as operator learning, XAI, and FDA. Our paper provides the following major contributions: * We present an early application of functional linear models for post-hoc interpretation of black-box deep learning models in scientific computing. * We provide a new library based approach together with sparse regression for discovering the kernels in the functional linear models. This provides more flexibility compared to prior FDA studies with pre-defined kernels. * The majority of post-hoc XAI approaches used in scientific machine learning are local and explain neural network's predictions in a region local to a desired input. Our proposed approach is a global surrogate model that could also be easily adapted to local interpretation tasks. 
* We demonstrate that our proposed functional linear model could be trained either on the data itself or by probing a trained neural network. This allows the model to be utilized either as an interpretable operator learning model or as a black-box interpreter. We document training and OOD testing performance in solid mechanics, fluid mechanics, and transport test cases. The rest of this paper is organized as follows. First, in Sec. <ref>, we provide a brief theoretical overview of different approaches such as FDA to motivate the use of integral equations as a surrogate for deep learning. Next, we present our proposed functional linear model (Sec. <ref>) and explain how it is applied for interpretation and OOD generalization in Sec. <ref>. In Sec. <ref>, we present our results for different scientific machine learning test cases. The results and our framework are discussed in Sec. <ref>, and we summarize our conclusions in Sec. <ref>. § METHODS §.§ Theoretical motivation and background Integral equations provide a mathematical framework that encourages the development of interpretable models by explicitly defining the relationships between variables. Our proposed interpretable surrogate model for understanding a deep learning operator is built upon integral equations. These integral equations yield an interpretable generalized linear model that approximates the predictions of the neural network. We provide a brief review of several topics in applied mathematics and machine learning to motivate the idea of using integral equations to build a surrogate for an available deep learning model. §.§.§ Green's functions In many physics-based learning tasks, we are interested in solving partial differential equations. Consider the differential equation L 𝐮 = 𝐟(𝐱), where one is interested in solving for 𝐮 for different input source terms 𝐟(𝐱). Similar to how a linear system of equations 𝐀𝐱=𝐛 could be solved as 𝐱 = 𝐀^-1𝐛 using an inverse operator 𝐀^-1, the above differential equation could also be inverted assuming L is a linear operator 𝐮(𝐱) = L^-1𝐟 = ∫𝐠(𝐱,ξ) 𝐟(ξ) dξ , where 𝐠(𝐱,ξ) is the Green's function corresponding to the linear operator L and the action of 𝐠(𝐱,ξ) on 𝐟 that produces the solution is the Green's operator. Therefore, at least for linear operators one can find an analytical operator representation in the form of an integral equation to map the given input 𝐟 to the output 𝐮. When dealing with a nonlinear operator, it is possible to employ a similar concept to find a linear approximation of the operator, at least within a local context. This motivates extending the Green's function concept to a generalized linear integral model that can approximate desired physics-based operator learning problems. Given the existing knowledge about Green's functions for linear differential equations <cit.>, we can design the integral equations based on the physical problem we are trying to solve. §.§.§ Convolutional neural networks (CNN) Convolutional neural networks (CNN) are arguably one of the most successful deep learning architectures and are widely used in computer vision <cit.> and mapping 2D image-like field variables in scientific machine learning <cit.>. A key reason behind CNN's success is the fact that each layer is only connected to a local spatial region in the previous layer. This is achieved using convolutional operators that enable CNN to learn hierarchical features.
We can write a convolutional integral operation as 𝐮(x,y) = ∫𝐊(ζ, η ) 𝐟(x-ζ,y-η) d ζ dη= ∫𝐊(x - ζ, y - η ) 𝐟( ζ, η) d ζ dη , where the output 𝐮 is generated by convolving the input 𝐟. In CNN, the above operation is done in a discrete manner and the kernel 𝐊 represents the learnable parameters of the network. Although convolution in a CNN involves a more complex process of sliding filters across the input and is accompanied by additional operations in different layers, the fundamental idea of a convolutional integral equation that maps inputs to outputs through convolutions inspires the development of integral equation models. Such models can construct interpretable surrogates for CNNs and other deep learning architectures. Interestingly, these convolution layers perform feature learning that, once combined with fully connected layers, allows the CNN to make predictions. Our proposed approach aligns closely with this strategy. Similarly, we leverage a library of integral functions to facilitate feature learning, and prediction is made through linear regression. In CNN, the first version of the above equation involving 𝐟(x-ζ,y-η) is used. However, in building our interpretable model, we will use the equivalent version involving 𝐟( ζ, η) (second form in Eq. <ref>). §.§.§ Radial basis function (RBF) networks Radial basis function (RBF) networks are a neural network generalization of kernel regression or classification <cit.>. RBF networks use radial basis functions as their activation function. For a single hidden layer, the output of an RBF network could be written as 𝐮(𝐱) = ∑_i=1^m w_i exp(- ‖𝐱- μ_i ‖^2 / (2 β_i^2) ) , where m different hidden units with different prototype vectors μ_i and bandwidths β_i are used with 𝐱 as an input. The weights of the network w_i are optimized to find the final solution. Each RBF influences a set of points in the vicinity of its feature vector μ_i with the distance of influence dictated by the bandwidth β_i. RBF networks are universal function approximators. In our library of integral equations for our surrogate model below, we will also leverage RBFs but in the integral form. That is, the feature vector μ will be replaced with a continuous variable and the integration will be done with respect to this variable. §.§.§ Gaussian process regression (GPR) In Gaussian process regression (GPR), a function is approximated using Gaussian processes, which are specified by a mean function and a covariance function (a kernel) <cit.>. The squared exponential kernel also used in RBF (Eq. <ref>) is a popular choice in GPR. GPR effectively integrates information from nearby points through its kernel function, similar to how we will build our interpretable model below. An intriguing observation is that as the number of neurons in a single hidden layer of a neural network approaches infinity, it evolves into a global function approximator. Similarly, under certain constructs, a neural network with a single hidden layer for a stochastic process converges towards a Gaussian process when the hidden layer contains an infinitely large number of neurons <cit.>. §.§.§ Neural operators Neural operators are an extension of neural networks that enable learning of mapping between infinite-dimensional function spaces <cit.>. Traditional neural networks also learn a mapping between functions (as used in our test cases below) but they require a fixed discretization of the function, whereas neural operators are discretization-invariant.
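A common computational thread running through the Green's function, convolution, RBF, GPR, and neural operator viewpoints is the discretized integral operator; a minimal one-dimensional quadrature sketch with an illustrative squared-exponential kernel is given below (kernel choice, bandwidth, and grid are assumptions, not choices made in the paper).

```python
import numpy as np

def integral_operator(kernel, v, xs, x_eval):
    """Approximate u(x) = ∫ K(x, ξ) v(ξ) dξ by a simple quadrature on a 1D grid."""
    Kmat = kernel(x_eval[:, None], xs[None, :])   # K(x_i, ξ_j)
    w = np.gradient(xs)                           # quadrature weights (local grid spacing)
    return Kmat @ (w * v)

# Illustrative squared-exponential kernel with an assumed bandwidth beta
beta = 0.1
rbf_kernel = lambda x, xi: np.exp(-(x - xi) ** 2 / (2.0 * beta ** 2))

xs = np.linspace(0.0, 1.0, 200)
v = np.sin(2.0 * np.pi * xs)                      # input function sampled on the grid
u = integral_operator(rbf_kernel, v, xs, xs)      # smoothed output function
```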
In neural operators, typically, each layer is a linear operator (e.g., an integral equation) and nonlinear activation functions are used to increase the expressive power. The input 𝐯 to each layer is first passed through an integral linear operator ∫𝐊(𝐱, ξ) 𝐯(ξ) d ξ using a pre-defined kernel 𝐊, and subsequently a nonlinear activation is applied. Therefore, neural operators also leverage integral equations in their regression tasks but build on neural network architectures for increased expressive power at the price of reduced interpretability. Different designs of the kernel lead to different neural operators. Fourier neural operators (FNO) are a popular and successful example that leverages Fourier transforms and convolutions <cit.>. Graph neural operators <cit.> are another example that uses integral equations similar to the approach we will employ in our model. These operators leverage Monte Carlo sampling techniques to approximate the integral equations. §.§.§ Functional data analysis (FDA) FDA is a mathematical framework that focuses on analyzing data in the form of smooth functions, rather than discrete observations <cit.>. We will be presenting our proposed framework within the context of FDA, and therefore more information is provided here. In FDA, the dependent variable, independent variable, or both are functionals. Broadly speaking, we may use FDA to perform mapping and regression when functions are involved either as input or output. Let's consider a mapping between an input function 𝐟(𝐱) and output 𝐮, where the output is either a function (scalar/vector field) or a single scalar/vector. In the simplest case mimicking classical regression, for a function output, one might write the output concurrently as 𝐮(𝐱) = α(𝐱) + ψ(𝐱)𝐟(𝐱), where α and ψ are bias and regression coefficient functions, respectively. However, this simple concurrent formulation does not consider the potential influence of neighboring points on the solution. Integral equations could be used to overcome this issue and provide a more realistic scenario. We can formulate the regression problem using functional linear models <cit.>. Assuming that all data are mean-centered, a fully functional model is applied to the case where the input and output are both functions 𝐮(𝐱) = ∫ψ(𝐱, ξ) 𝐟(ξ) d ξ , in which the goal is to find ψ. In a separate problem, when the output is a single scalar/vector value, the problem can be formulated as a scalar/vector response model 𝐮 = ∫ψ(ξ) 𝐟(ξ) d ξ . Finally, if the output is a function and the input is a single scalar/vector value, the problem can be written as a functional response model 𝐮(𝐱) = ψ(𝐱) 𝐟 . In this paper, we will only study the first two cases (Eqs. <ref> and <ref>). §.§ Interpretable functional linear models The discussion above highlights the importance of integral equations in learning mappings between function spaces. Although the various methods mentioned earlier may have similarities and can be considered equivalent under certain conditions, our primary focus will be on FDA with functional linear models. To enhance the expressive capacity of functional linear models, we will expand their capabilities in three distinct ways: * First, we will lift the input functions into a higher-dimensional feature space using a pre-specified lifting map 𝒯 (e.g., polynomials) and then define functional linear models for each component of the new feature space separately and use linear superposition to define the final model.
Such lifting operations have been successfully used in scientific machine learning models (e.g., <cit.>). * We will use generalized functional linear models <cit.>. Specifically, we will allow a nonlinear function g(.) to be applied to the functional linear models to create outputs such as 𝐮() = g ( ∫ψ(, ξ) 𝐟(ξ) d ξ). * Model selection (choice of the kernel) and tuning its hyperparameters is a difficult task in various forms of kernel regression <cit.>. Instead of pre-specifying the kernels ψ, we will pre-define a library of kernels and associated hyperparameters. Subsequently, we will use sparse regression to select among the library of candidate functions. By specifying the desired level of sparsity, a balance can be achieved between interpretability and accuracy. In the examples explored in this work, we investigate deep learning tasks and corresponding interpretable functional linear models where the input is a 2D function (image) defined on Ω and the output is either a single scalar value, a 1D function (line), or a 2D function (image). These models can be considered as mappings: 𝐟(x,y) →𝐮, 𝐟(x,y) →𝐮(x), and 𝐟(x,y) →𝐮(x,y), respectively. Incorporating the above three modifications to functional linear models and using convolution-like operators for the tasks involving image or line outputs, we write the final models in the most general form as 𝐮(x,y) = ∑_n=1^N∑_m=1^M∑_ℓ=1^L w_n,m,ℓ g_n ( ∫_Ωψ_m(x-ζ, y-η ) 𝒯_ℓ𝐟(ζ,η) d ζ dη) , 𝐮(x) = ∑_n=1^N∑_m=1^M∑_ℓ=1^L w_n,m,ℓ g_n ( ∫_Ωψ_m(x-ζ, η ) 𝒯_ℓ𝐟(ζ,η) d ζ dη) , 𝐮 = ∑_n=1^N∑_m=1^M∑_ℓ=1^L w_n,m,ℓ g_n ( ∫_Ωψ_m(x, y ) 𝒯_ℓ𝐟(x,y) d x d y ) , where a linear combination of L different lifting operations 𝒯 on the inputs, M different kernels ψ, and N different nonlinear functions g are used in writing the final solution. This could be considered as a generalized version of an additive functional regression <cit.>. The goal is to formulate a linear regression problem based on the above analytical equations and training data to find the unknown coefficients w_n,m,ℓ. We do not impose any constraint on the kernel ψ besides being L^2, and therefore inducing Hilbert-Schmidt operators. Below we present a few remarks. * The above models are analytically tractable (interpretable), particularly for small L, M, and N. Sparsity promoting regression will be used in this study to eliminate many of the weights w_n,m,ℓ in a data-driven fashion and improve the interpretability of the final model. The remaining non-zero weights represent a reduced-order representation of the system, which behaves linearly with respect to its parameters w_n,m,ℓ. * In practice, it is not necessary to consider all possible combinations of lifting, kernels, and nonlinearity in the library employed for sparse regression. The library could be defined in a flexible fashion as an arbitrary combination of these operators and the final solution will be a linear superposition of the selected terms in the library. * The kernels ψ provide an interpretation for each term in the model. ψ(x-ζ, y-η) in Eq. <ref> represents the effect of input function 𝐟 at point (ζ,η) on the output function 𝐮 at point (x,y). ψ(x, y) in Eq. <ref> represents a weight for the influence of the input function 𝐟's value at point (x,y) on the output 𝐮 and creates a weighted average. * Most kernels used are equipped with a bandwidth that also needs to be estimated and represents a characteristic problem-dependent length scale and smoothing parameter. 
Therefore, in our library of candidate terms, for each such kernel, we also consider several candidate bandwidths and treat each kernel separately. Therefore, M in the above equations is typically a large value. For instance, if three different analytical expressions are proposed for the kernels ψ with 20 different potential bandwidths each, then M=60. * To enable approximation of the integrals during training, the above integrals are replaced with discrete sums that approximate the integrals. Therefore, the above models could be compared to a graph neural operator with a single hidden layer <cit.>. However, in our model, various kernels are added linearly in parallel to form the final solution in an analytically simple manner, whereas in neural operators the kernels are added sequentially in different hidden layers, which reduces the interpretability. Additionally, as discussed below, we provide a library approach for kernel selection. * In this work, we only study regression tasks. The proposed approach could be extended to classification tasks with appropriate selection of the nonlinear function g <cit.>, similar to activation function selection in deep learning. To find the coefficients w_n,m,ℓ, a linear regression problem is formulated based on the above integral equation models. Let's assume a set of Q training data pairs (𝐟 and 𝐮) is available and sampled over a set of collocation points x_i and y_j (i=1,…,I, j=1,…,J) defined on a 2D grid (a total of N' = I × J points). The input image 𝐟(x_i,y_j) is mapped to 𝐮(x_i,y_j), 𝐮(x_i), or 𝐮 based on the task. Additionally, let's assume a total of P terms is arbitrarily selected among the L× M × N candidate terms for the library of integral equations. The above integral equations could be numerically evaluated using any numerical integration technique for each of the collocation points. This will result in a system of linear equations in the form 𝐔 = 𝐅𝐖, where 𝐔 is a (QN' ) × 1 column vector of outputs, 𝐅 is a (QN' ) × P regression matrix formed based on evaluating the integrals, and 𝐖 is a P × 1 column vector that contains the unknown coefficients for each integral equation. Sparse regression is used to find the solution by solving the following convex optimization problem min_𝐖‖𝐔 - 𝐅𝐖‖_2 + λ‖𝐖‖_1 , where λ is a sparsity promoting regularization parameter. This optimization problem is solved using a sequential thresholded least-squares algorithm <cit.> to find 𝐖. Increasing λ will reduce the number of active terms in the final integral equation model (improved interpretability) but can reduce the accuracy. Our proposed framework resembles sparse identification of nonlinear dynamics (SINDy) where a similar optimization problem together with a library of candidate terms is used for interpretable data-driven modeling of dynamical systems <cit.>. λ=0.1 was used for all cases unless noted otherwise. In the Appendix (Sec. <ref>), we present an alternative strategy for solving this linear regression problem by presenting the normal equations for functional linear models. The library of candidate terms for each task and test case (defined in the Results Section) is listed in Table <ref>. The range and number of bandwidths β used for each case are also listed. In the more complex tasks, a large number of candidate bandwidths should be selected. Additionally, some of the candidate integral terms were defined based on a truncated domain of integration (local influence), which is a common practice in related methods <cit.>. 
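As a hedged illustration of how the regression matrix 𝐅 and the sparse coefficient vector 𝐖 can be assembled for the scalar-response case, the sketch below uses a small hypothetical library (a constant kernel, a few Gaussian kernels, polynomial liftings, and identity/tanh nonlinearities) together with a basic sequential thresholded least-squares loop; the actual library of Table <ref>, the quadrature rule, and the thresholding schedule used in our experiments may differ.

import numpy as np

# Hedged sketch of the library + sparse-regression step for the scalar-response
# task u = sum_p w_p g_n( \int psi_m(x,y) T_l f(x,y) dx dy ). The kernels,
# liftings, and nonlinearities below are illustrative placeholders.
I = J = 28
x = np.linspace(0.0, 1.0, I)
y = np.linspace(0.0, 1.0, J)
X, Y = np.meshgrid(x, y, indexing="ij")

liftings = [lambda f: f, lambda f: f**2]                          # T_l
kernels = [np.ones_like(X)] + [np.exp(-((X - 0.5)**2 + (Y - 0.5)**2) / (2.0 * b**2))
                               for b in (0.1, 0.2, 0.4)]           # psi_m
nonlins = [lambda z: z, np.tanh]                                   # g_n

def features(f):
    # Evaluate every library term with a double trapezoidal rule per integral.
    feats = []
    for g in nonlins:
        for psi in kernels:
            for T in liftings:
                integral = np.trapz(np.trapz(psi * T(f), y, axis=1), x)
                feats.append(g(integral))
    return np.asarray(feats)

def stlsq(F, U, lam=0.1, iters=10):
    # Sequential thresholded least squares for U ~ F W; here lam acts as a
    # hard threshold on |W| rather than an explicit L1 penalty.
    W = np.linalg.lstsq(F, U, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(W) < lam
        W[small] = 0.0
        big = ~small
        if big.any():
            W[big] = np.linalg.lstsq(F[:, big], U, rcond=None)[0]
    return W

# Toy usage with Q synthetic input images f_q and scalar targets u_q.
rng = np.random.default_rng(0)
fs = rng.random((50, I, J))
us = np.array([np.trapz(np.trapz(f, y, axis=1), x) for f in fs])   # toy target
F = np.vstack([features(f) for f in fs])
W = stlsq(F, us)

Reading the thresholding parameter as a hard cutoff on the coefficients, rather than as the L1 weight itself, is the usual practical interpretation of the sequential thresholded least-squares algorithm and is the convention adopted in this sketch.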
§.§ Interpreting and generalizing deep learning with an interpretable surrogate Our proposed framework provides an interpretable approach for learning operators and mapping between functions. The entire model is simply a linear combination of integral equations (listed in Table <ref>). The model is trained by assuming a library of candidate integral equations and solving the convex optimization problem in Eq. <ref>, which allows for the determination of coefficients associated with each integral. Subsequently, given any new input function 𝐟(x,y) one could evaluate the integral equations to find the solution 𝐮. The input function's definition is flexible and could be defined either analytically or numerically on an arbitrary grid. A schematic overview is shown in Fig. <ref>. In this manuscript, we demonstrate three application areas for our proposed interpretable model: * Interpreting a trained neural network. Given a trained neural network for mapping between function spaces, we will probe the network using a desired range of the input function to generate pairs of inputs and outputs. Subsequently, the input and output data will be used to build our interpretable surrogate model, which provides an analytical equation that approximates the behavior of the neural network. The neural network could be probed within the entire range of its training landscape or locally to better understand its behavior in a localized landscape (a specific range of training data). Finally, the network could be probed with out-of-distribution input data to understand the network's behavior outside of its training landscape. It should be noted that the network does not necessarily need to be probed with the exact data that the network used for training. * Generalizing a trained neural network. The surrogate model built based on the data from the probed neural network could also be used to improve out-of-distribution generalization. Namely, the simpler and interpretable model is expected to perform better in extrapolation and generalization. Therefore, one could envision a hybrid model where the neural network is utilized to generate the output when the input data falls within the training landscape. On the contrary, when the input data lies outside of the training landscape, the interpretable surrogate model would be invoked. Of course, this will require one to first determine the boundary of the training landscape, which might not be trivial in some problems <cit.>. * An interpretable machine learning model. The interpretable model could be trained directly based on training data to build an interpretable machine learning model in the form of a linear sum of integral equations. § RESULTS First, we will present a simple 1D example to motivate the importance of interpretable machine learning models in the context of generalization. Let's consider the 1D function u(x) = 4x sin(11x) + 3cos(2x)sin(5x). The goal is to learn this function given (x,u) training data. We use 120 training points in the range -0.2<x<0.5, which is considered to be the training region. We are interested in observing how the trained machine learning model performs within the range -1<x<1, which will require generalization to out-of-distribution inputs. A fully connected neural network with three hidden layers and 100 neurons per layer and a Gaussian process regression (GPR) model, which is more interpretable than the neural network are used for training. The results are shown in Fig. <ref>. 
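A minimal sketch of this 1D experiment is given below using scikit-learn; the exact optimizer settings, initialization, and GPR kernel hyperparameters behind the figure may differ from the illustrative choices made here.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Sketch of the 1D motivating example (assumed settings, not the exact ones).
def target(x):
    return 4.0 * x * np.sin(11.0 * x) + 3.0 * np.cos(2.0 * x) * np.sin(5.0 * x)

rng = np.random.default_rng(0)
x_train = rng.uniform(-0.2, 0.5, 120)           # training region
x_test = np.linspace(-1.0, 1.0, 400)            # includes OOD inputs

nn = MLPRegressor(hidden_layer_sizes=(100, 100, 100), max_iter=5000,
                  tol=1e-6, random_state=0)
nn.fit(x_train[:, None], target(x_train))

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.1) + WhiteKernel(1e-6))
gpr.fit(x_train[:, None], target(x_train))

u_nn = nn.predict(x_test[:, None])              # black-box prediction
u_gpr = gpr.predict(x_test[:, None])            # more interpretable baseline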
It can be seen that both models perform well within the training region. However, the black-box neural network model has worse performance outside of the training region compared to GPR. For mild extrapolation outside the training region, the GPR model has relatively good performance compared to the neural network. In the following subsections, we will present different examples to test our proposed interpretable model. In each test case, we will quantify the training error and test error. Throughout the manuscript, by test we imply out-of-distribution test. Errors are quantified for the neural network (NN) model, the interpretable model trained based on the probed trained neural network (Interp NN-driven), and the interpretable model trained based on training data (Interp data-driven). The mean and maximum errors for each case are listed in Table <ref> and <ref>, respectively. The quantified errors are based on point-wise errors quantified from the data aggregated from all pairs of data. In cases below, the input data is a 2D scalar field (image) sampled with a 28×28 resolution. In all cases with the exception of case 1 both input and output fields are normalized. In all examples (except test case 6), the same input training data used in training the neural network was employed for probing the neural network in the NN-driven interpretable model. §.§ Test case 1: predicting strain energy from a heterogeneous material The Mechanical MNIST–Distribution Shift Dataset <cit.> consists of finite element simulation data of a heterogeneous material. As shown in Fig. <ref>a, the elastic modulus distribution of the heterogeneous material is mapped from the bitmap images of the MNIST and EMNIST datasets <cit.>. The elastic modulus values E of the image bitmaps have non-zero values, and lie within a pre-defined range that depends on the distribution. Pixel bitmaps are transformed into a map of elastic moduli by transforming the pixel value b of the bitmap images through the equation E= b/255.0*(s-1)+1. In the Mechanical MNIST–Distribution Shift dataset selected <cit.>, the value s is set to 100 for training data and 25 for testing data. In the Distribution Shift EMNIST dataset, the value s is set to 100 for training data and 10 for testing data. In both cases, equibiaxial extension was applied to the heterogeneous materials through a fixed displacement d=7.0 at all boundaries. In both cases, the training data was randomly split into 80% training and 20% validation. A neural network was used to predict the change of strain energy in the material after the extension. The network consists of five fully connected layers with ReLu activation function. The training data was input as one single batch and the model was trained at a learning rate 0.001 for 50001 epochs. The absolute error distribution is shown in boxplots in Fig. <ref>. Interpretable models improve the test error and the interpretable model trained directly on data has better generalization performance. As also shown in Table <ref> and <ref>, the two different interpretable model strategies exhibit comparable performance on the training data, and their distinction becomes more apparent during testing. Another notable observation is that, in the case of EMNIST data, the data-driven interpretable model exhibits superior performance in training compared to the neural network model and exhibits lower mean and maximum training errors. However, the improvement is much smaller when considering the improvement in generalization error. 
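For reference, the bitmap-to-modulus preprocessing used in this test case, together with the evaluation of one hypothetical library term of the scalar-response model, can be sketched as follows; the unit-square grid and the Gaussian weight are illustrative assumptions, not the exact terms of Table <ref>.

import numpy as np

# A bitmap b (28x28, values 0-255) is mapped to an elastic-modulus field
# E = b/255*(s-1)+1, which then serves as the input function f(x,y).
def bitmap_to_modulus(bitmap, s=100.0):          # s=100 (train) or s=25 (test)
    return bitmap.astype(float) / 255.0 * (s - 1.0) + 1.0

bitmap = np.random.default_rng(1).integers(0, 256, size=(28, 28))
E = bitmap_to_modulus(bitmap, s=100.0)

# Example of one scalar-response library term \int psi(x,y) E(x,y) dx dy.
x = y = np.linspace(0.0, 1.0, 28)
psi = np.exp(-((x[:, None] - 0.5)**2 + (y[None, :] - 0.5)**2) / (2 * 0.3**2))
feature = np.trapz(np.trapz(psi * E, y, axis=1), x)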
§.§ Test case 2: predicting maximum velocity from a heterogeneous porous medium In this case, we considered porous media flow in a 2D square domain [0,1] × [0,1] governed by the steady Darcy-Brinkman equation αμ/k 𝐮 = -∇ p + ∇^2 𝐮 , ∇·𝐮 = 0 , where μ=10 and a heterogeneous permeability of k(x,y) =0.1exp(Ax) + 1 was used. A free-slip boundary condition (BC) was imposed at the top and bottom walls (Fig. <ref>a) and the flow was driven by a pressure gradient (p=1 and p=0 on the left and right sides, respectively). The porous domain was switched on using the α parameter set to α=1 when √((x-0.5)^2 + (y-Y)^2 )≤ R and α =0 otherwise as shown in Fig. <ref>a. Training data was generated by varying A, Y, and R within 0 ≤ A ≤ 2, -0.1 ≤ Y ≤ 0.15, and 0.09 ≤ R ≤ 0.16. The goal of the deep learning model was to predict the maximum velocity given α k(x,y) as the input function. A total of 2250 2D simulations were performed using the open-source finite-element method solver FEniCS <cit.> using ∼70k triangular elements. The data were randomly split into 90% training and 10% validation. Out-of-distribution test data was also generated by running 100 simulations within 0 ≤ A ≤ 2, 0.2 ≤ Y ≤ 0.3, and 0.1225 ≤ R ≤ 0.2025 (note that Y is completely outside the previous range). A convolutional neural network with three layers of convolution (5×5 kernel, 6,16,32 channels, and maxpooling) was used, followed by three hidden fully connected layers to map the input 2D function into a single scalar value. 2000 epochs with a learning rate of 5× 10^-4 and a batchsize of 64 were used. In this example, the L1 regularized formulation (Eq. <ref>) did not produce good test results compared to the neural network, and therefore an L2 regularization was used (presented in the Appendix, Sec. <ref>). λ = 10^-9 was the L2 regularization parameter and the preconditioned conjugate gradients method was used for solving the normal equations. The absolute error distribution is shown in boxplots in Fig. <ref>b. In this case, as expected, the neural network had a better training error compared to the interpretable models. However, the interpretable models significantly reduced the test error. In this case, the NN-driven and data-driven interpretable models had similar performance in training and testing, which is likely due to the very good neural network training error. §.§ Test case 3: predicting velocity magnitude field from a heterogeneous porous medium The same boundary conditions and setup as test case 2 are considered again (without the Brinkman diffusion term). In this test case, more complex permeability patterns are considered and the goal is to predict the 2D velocity magnitude field (image to image mapping). The input permeability field is defined as k(x,y) = exp (-4Ax) |sin(2π x)cos(2π B y) | + 1, and 0 ≤ A ≤ 1, 0 ≤ B ≤ 4 were used in generating 225 simulations used for training. The data were randomly split into 80% training and 20% validation. The goal was to predict the velocity magnitude field u(x,y) given k(x,y) as the input function. Out-of-distribution test data were also generated by running 64 simulations within 1 ≤ A ≤ 2 and 4.2 ≤ B ≤ 6. In this case, a fully-connected deep autoencoder was used. The encoder mapped the input 28× 28 field to a latent size of 32 through 4 layers, which was subsequently mapped back to another 28× 28 field by the decoder with a similar structure as the encoder. 2000 epochs with a learning rate of 5× 10^-4 and a batchsize of 64 were used. The results are shown in Fig. <ref>.
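Before discussing the results, we note that the input fields of these two test cases can be reproduced on the 28×28 grid used by the interpretable model with a few lines of code; the parameter values below are arbitrary examples drawn from the stated training ranges.

import numpy as np

# Sketch of the input fields for test cases 2 and 3 on the 28x28 grid.
n = 28
x = y = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, y, indexing="ij")

# Test case 2: alpha * k(x,y) with k = 0.1 exp(A x) + 1 inside a disk.
A, Yc, R = 1.0, 0.0, 0.12
alpha = (np.sqrt((X - 0.5)**2 + (Y - Yc)**2) <= R).astype(float)
input_tc2 = alpha * (0.1 * np.exp(A * X) + 1.0)

# Test case 3: k(x,y) = exp(-4 A x) |sin(2 pi x) cos(2 pi B y)| + 1.
A3, B3 = 0.5, 2.0
input_tc3 = (np.exp(-4.0 * A3 * X)
             * np.abs(np.sin(2 * np.pi * X) * np.cos(2 * np.pi * B3 * Y)) + 1.0)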
The contour plots and the error boxplot show that the neural network makes a better qualitative and quantitative prediction within the training regime. However, similar to the last test cases, the interpretable models have better generalization performance as shown in the boxplot (Fig. <ref>b) and Table <ref> and <ref>. §.§ Test case 4: predicting high-fidelity velocity field from low-fidelity velocity field An idealized 2D constricted vessel mimicking blood flow in a stenosed artery was considered similar to our prior work <cit.> as shown in Fig. <ref>. Steady incompressible Navier-Stokes equations were solved for a Newtonian fluid in FEniCS. A parabolic velocity profile was imposed at the inlet and no-slip BC was used at the walls. Training data were generated by performing 400 computational fluid dynamics simulations with different flow rates corresponding to different Reynolds numbers (defined based on average velocity at the inlet) between 15 and 225. In the high-resolution finite element simulations, quadratic and linear shape functions were used for velocity and pressure, respectively (P2-P1 elements) with 41.4k triangular elements. Similarly, low-resolution (low-fidelity) simulations were performed by increasing the viscosity by 20% (representing a dissipative solution with artificial diffusion) and using first order velocity elements (P1-P1 elements) with a total of 536 elements. The goal of the machine learning models is to predict the high-fidelity velocity magnitude field u_hres(x,y) from the low-fidelity field u_lres(x,y). We focus on a specific region of interest downstream of the stenosis as shown in Fig. <ref>b. Superresolution with machine learning is an active area of research in fluid mechanics <cit.>, and additionally, prior machine learning models have dealt with mapping between multi-fidelity data <cit.>. In our example, both datasets are first interpolated to a structured 28×28 grid. 100 out-of-distribution high-resolution and low-resolution simulations were also performed by varying the Reynolds number between 240 and 300. The neural network architecture was a deep autoencoder similar to test case 3 but with one additional encoder and decoder hidden layer. The training data were randomly split into 80% training and 20% validation. 5000 epochs with a learning rate of 2.5× 10^-5 and a batchsize of 64 were used. Finally, in this test case, instead of using a broad range for the candidate bandwidths in the interpretable model (Table <ref>), we select a focused range estimated based on existing plug-in methods for optimal bandwidth selection. Namely, β_opt = 𝒪( n^-0.3) has been proposed as an optimal bandwidth for Gaussian kernels <cit.>. Considering n=28 as the number of points in each direction, β_opt≈ 0.37. Therefore, we focused on 0.2<β<0.4 in constructing our library (Table <ref>). We verified that this range gave optimal training errors compared to other choices. It should be noted that the problem of optimal bandwidth selection is complicated <cit.>, particularly for our problem where different kinds of kernels and generalized linear models are used. The contour plots and the error boxplots are shown in Fig. <ref>. The neural network produces very accurate training results indistinguishable from the ground-truth. The interpretable model results also mimic the key quantitative and qualitative patterns with minor distinctions visible.
However, in this test case, the interpretable models could not improve the out-of-distribution test error compared to the neural network, but similar to other examples it provided an approximation to the neural network behavior in the training regime. §.§ Test case 5: predicting high-fidelity wall shear stress field from low-fidelity velocity data away from the wall In this example, we reconsider the exact same dataset in the constricted artery model of the previous test case. The goal of the machine learning model here is to take the low-fidelity velocity magnitude field in the same region of interest (away from the wall) and predict high-fidelity wall shear stress (WSS) at the bottom wall as shown in Fig. <ref>. In this case, the machine learning model needs to map a 2D scalar field to a 1D scalar field. A deep autoencoder similar to test case 3 was used with the last encoder layer being mapped to a 100 × 1 line instead of an image. 5000 epochs with a learning rate of 2.5× 10^-5 and a 64 batchsize were used. As shown in Fig. <ref>, all methods provide a very accurate estimate for WSS in the training regime. In this case, the distinction between the training and test errors was more pronounced for both neural network and interpretable models. As seen more clearly in Table <ref> and <ref>, in testing, the mean absolute error was considerably reduced for the interpretable models. However, in a relative sense, the peak error during testing was not reduced as much as in some of the previous cases. Another interesting observation was that the data-driven interpretable model also had better training performance compared to the neural network model. §.§ Test case 6: local explanation of neural network predictions in a porous media flow example In all of the previous test cases, we used the exact same data used in training the neural network to train the proposed interpretable models. However, this is not required for the NN-driven Interp model. Namely, the trained neural network could be probed for any desired input to generate pairs of input-output data for training the NN-driven Interp model. In the case where one is interested in explaining the neural network behavior within the training regime, the NN-driven Interp model will be trained with a combination of training and in-distribution test data. In this last test case, we consider the porous media flow in test case 2. We reconsider the problem where the goal is to predict the velocity magnitude (instead of maximum velocity) from the input modified permeability field as shown in Fig. <ref>. The same dataset used in test case 2 is used for training the neural network. An autoencoder mapped the input 28× 28 field to a latent size of 8 through 4 layers, which was subsequently mapped back to another image by a similar decoder. 2000 epochs with a learning rate of 5× 10^-4 and a batchsize of 64 were used. The neural network was trained on the entire dataset explained in test case 2. However, the goal here was to interpret the neural network predictions locally. The position of the porous region was fixed at R=0.02 and Y=-0.1. The trained network was probed for 100 different A values (permeabilities) ranging between 0 ≤ A ≤ 2. This represented a local probing of the neural network with a higher sampling rate than what was used for its training. Finite element simulations were also performed for error quantification. The results are shown in Fig. <ref>. A data-driven Interp model was also trained based on the ground-truth data for comparison. 
The NN-driven Interp model produced very accurate results and could faithfully explain the neural network behavior in this localized region of the training landscape. An interesting observation is that the NN-driven Interp model slightly improves the training error compared to the neural network model and produces slightly smoother qualitative patterns. The data-driven Interp model produces significantly more accurate results compared to the neural network model. This should not be surprising because in this case the data-driven Interp model was trained based on the ground-truth data in a localized parameter space, whereas the neural network was trained over a larger parameter space. Test errors are not shown in Fig. <ref>b as in this case the Interp models were not trained based on the entire data. Instead, the errors in interpretable model predictions with respect to the neural network predictions are shown. As expected, the NN-driven Interp case matches the NN behavior more closely compared to the data-driven Interp case. The difference between the two interpretable models was less in most previous test cases where global interpretation instead of local interpretation was done. § DISCUSSION In this study, we proposed an interpretable surrogate model that approximates neural network's predictions locally or globally. The interpretable model was in the form of integral equations inspired by functional linear models. We applied our framework to different deep learning models trained on making predictions based on functions and functionals in different physics-based problems. The results demonstrated that in most test cases the interpretable model improved generalization error and even in some cases training error was improved compared to the neural network. Our proposed approach for improving generalization error could be compared to the process of human thinking. When we are asked questions that are outside our knowledge domain we probe the existing knowledge in our brain and we generate an answer to the new questions by using interpretation and reasoning. The proposed NN-driven interpretable model could be perceived within this context where we probe the neural network (our existing knowledge) to build an interpretable model to answer an unknown question (an OOD input). A surprising observation was the improved training error in the interpretable model compared to the deep learning model in some cases. In test case 1 (EMNIST), the mean and peak training errors were reduced by NN-driven and data-driven interpretable models, and in test case 5 the data-driven interpretable model reduced both mean and peak training errors. Also, in some other cases (e.g., test case 3), the maximum training error was reduced. Training error improvement by the NN-driven interpretable model observed in certain cases was a particularly unprecedented result that could be attributed to the smoothing effect in functional linear models, which has been well studied in the context of kernel smoothing <cit.>. It should be noted that in evaluating the training errors, all of the training data that were randomly split into training and validation were used. Except for test case 4, the interpretable models consistently exhibited reduced test error across all cases. This suggests that interpretable models have the potential to enhance predictive accuracy and generalize well to unseen data, showcasing their effectiveness in improving model performance. 
A notable characteristic of our proposed framework is its inherent flexibility. Our interpretable model could be built either based on the neural network predictions (NN-driven) or the training data without the need for a neural network (data-driven). The former is preferred when an interpretation of a black-box neural network model is desired, while the latter is preferred where improved accuracy (particularly improved OOD generalization) is desired. Our framework also shares many of the advantages offered by other operator learning models. For instance, similar to neural operators our framework once trained could be used to evaluate the solution at any desired input location, rather than being restricted to fixed locations as in traditional neural networks <cit.>. It has been shown in prior operator learning work with DeepONets that a small amount of data can improve their generalization error <cit.>. It has also been demonstrated that sparsity promoting neural network architectures can have good performance with small training data <cit.>. Our proposed interpretable model promotes a sparse solution to the operator learning problem, and therefore even just a small amount of OOD training data is expected to even further improve its OOD generalization, which should be investigated in future work. In related work, deep learning has been used to discover extensions of Green's functions beyond linear operators <cit.>. It is known that approximating Green's functions with neural networks is easier than approximating the action of Green's function on the input (Green's operator) <cit.>. This is consistent with our framework where we learn kernel functions in our integral equations. Another analogy could be made with Koopman operators, which provide a theoretical framework for linearizing dynamical systems <cit.> and have been approximated with black-box neural networks <cit.>. Dynamic mode decomposition (DMD) is an interpretable numerical approximation of the Koopman operator. DMD's interpretability is improved by retaining fewer modes or using sparsity promoting approaches <cit.>. This is similar to our framework where an interpretable model is selected in the form of generalized functional linear models to approximate an unknown operator. Additionally, the tradeoff between accuracy and interpretability is similar where reducing the number of modes in DMD (or the number of integral equations in our framework) increases interpretability at the cost of potentially reduced accuracy. The utilization of a library of candidate models has been leveraged in other scientific machine learning problems. Sparse identification of nonlinear dynamics (SINDy) models a nonlinear dynamical system by constructing analytical equations in the form of a nonlinear system of ordinary differential equations, where the terms in the equations are selected from a pre-specified library <cit.>. As another example, a library of hyperelastic constitutive equations has been used for discovering constitutive models in nonlinear solid mechanics problems <cit.>. Machine learning ROMs have been proposed where a library of proper orthogonal decomposition (POD) modes are used for parameter identification from low-resolution measurement data <cit.>. Another analogy can be drawn with ensemble machine learning models. Neural additive models use an ensemble of parallel neural networks and make final predictions with linear superposition <cit.>. 
Similarly, our approach could be perceived as an ensemble of approximations to the solution (each integral equation) that are linearly added to build the final solution. Our proposed framework offers the flexibility to be extended to other deep learning tasks. For instance, in certain tasks, in addition to a field variable, some physical parameters might also be inputs to the neural network. As an example of an extension to such cases, the scalar response model (Eq. <ref>) could be extended as 𝐮 = r(z) ∫ψ(ξ) 𝐟(ξ) d ξ + γ z similar to the work in <cit.>, where z is the additional input parameter, and r and γ are an unknown function and parameter, respectively, that need to be estimated. Leveraging analytical integral equation models in classical physics is another possible extension. An example of analytical integral equations used in fluid dynamics is the Biot-Savart law used in modeling vortex dynamics <cit.>. This has recently inspired the neural vortex methods, which use neural networks to map vorticity to velocity <cit.>. Our analytical integral equation approach also offers the possibility of solving inverse problems using standard approaches used in solving integral equations <cit.>. Integral equations have been utilized in developing mathematical theories for inverse problems and their numerical solution <cit.>. Another interesting future direction is the comparison of our method's generalization with other operator learning methods such as DeepONets <cit.> and Fourier neural operators <cit.>. Extension to time-dependent problems is another future direction, which is inspired by parabolic Green's functions <cit.>. § CONCLUSION We have proposed an interpretable surrogate model to not only interpret a given neural network but also improve generalization and extrapolation. Our results demonstrate very good and comparable training error and, in most cases, improved OOD generalization error compared to the neural network. In a broader sense, our framework suggests the notion of a hybrid machine learning strategy where a trained deep learning model is used for in-distribution predictions and an interpretable surrogate is utilized for OOD predictions. This hybrid strategy could be compared with hybrid finite-element and neural network strategies recently proposed to improve neural network predictions <cit.>. Our study suggests that by leveraging integral equations in the form of generalized functional linear models, we can build more interpretable and explainable scientific machine learning models with a high potential for improved generalization. § ACKNOWLEDGEMENT This work was supported by NSF Award No. 2247173 from NSF's Office of Advanced Cyberinfrastructure. We would like to thank Dr. Emma Lejeune and Dr. Harold Park for discussions related to this work and assistance in using the MNIST/EMNIST datasets. § DATA AVAILABILITY The codes and data used to generate the results in the manuscript will be made publicly available after peer review. § APPENDIX §.§ Normal equations for functional linear models Here, we present an alternative strategy for finding the kernels in functional linear models using the normal equations, based on the presentation in <cit.>. Let's consider the fully functional model, which was used for image to image mapping in this study (Eq. <ref>), in the scalar form 𝐮(𝐱) = ∫ψ( ξ, 𝐱) f(ξ) d ξ , where given Q pairs of training data, we have grouped them as column vectors u(𝐱) = [ u_1(𝐱) , …, u_Q(𝐱) ]^T and f(ξ) = [ f_1(ξ) , …, f_Q(ξ) ]^T.
We expand the unknown kernel function in Eq. <ref> using pre-defined arbitrary bases as ψ( ξ, 𝐱) = ∑_i∑_j b_ijω_i(ξ) θ_j(𝐱) , where ω_i and θ_j are the bases and b_ij are the unknown coefficients that could be grouped into a matrix 𝐁 = [ b_ij]. Our goal is to solve the following least squares problem min_ψ∑_n=1^Q ‖ u_n (𝐱) - ∫ψ( ξ, 𝐱) f_n(ξ) d ξ‖^2 . Grouping the bases into column vectors ω (ξ) = [ω_1(ξ) , …]^T and θ(𝐱) = [θ_1(𝐱) , …]^T, we can rewrite Eq. <ref> in matrix form as 𝐮 (𝐱) = 𝐙𝐁θ(𝐱) , where 𝐙 = ∫𝐟(ξ) ω^T(ξ) d ξ. Finally, by defining the matrix 𝐉 = ∫θ(𝐱) θ^T(𝐱) d𝐱, we can derive the final form of the normal equations 𝐙^T 𝐙𝐁𝐉 = 𝐙^T ∫ u(𝐱) θ^T(𝐱) d𝐱 , where we need to solve for 𝐁. We can also write a similar version of the above equation by reconsidering the optimization problem in Eq. <ref>, which was used for approximating the solution of 𝐔 = 𝐅𝐖 in Sec. <ref>. Instead of introducing an L1-regularized problem as done in Eq. <ref>, we can directly solve this regression problem using the normal equations 𝐅^T 𝐅𝐖 = 𝐅^T 𝐔 . This equation could be solved using a linear solver to find 𝐖. However, in practice the 𝐅^T 𝐅 matrix is highly ill-conditioned and close to singular, and therefore an L2 regularization should be added ( 𝐅^T 𝐅 + λ𝐈 ) 𝐖 = 𝐅^T 𝐔 , where λ is the regularization parameter. An increased λ provides a more robust linear system of equations but at the cost of reduced accuracy. Our preliminary investigation has shown that this formulation in certain cases produces more accurate results related to the training error. The OOD generalization error was better in most cases for the L1-regularized problem (except for test case 2). It should also be noted that the L2-regularized problem produces a dense solution where most integral equations in the library will be nonzero, and therefore a less interpretable model is produced.
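A minimal sketch of solving the L2-regularized normal equations above with conjugate gradients is given below, using SciPy and a matrix-free operator; the preconditioner used in test case 2 is omitted for brevity, and 𝐅 and 𝐔 are assumed to be assembled as in the main text.

import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

# Solve (F^T F + lambda I) W = F^T U with (unpreconditioned) conjugate gradients.
def solve_normal_equations(F, U, lam=1e-9):
    n = F.shape[1]
    A = LinearOperator((n, n), matvec=lambda w: F.T @ (F @ w) + lam * w)
    b = F.T @ U
    W, info = cg(A, b, atol=1e-10)   # info == 0 indicates convergence
    return W

# Toy usage: recover a dense (non-sparse) coefficient vector.
rng = np.random.default_rng(0)
F = rng.standard_normal((200, 30))
U = F @ rng.standard_normal(30)
W = solve_normal_equations(F, U)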
http://arxiv.org/abs/2307.04037v2
20230708195151
Employing Drones in Agriculture: An Exploration of Various Drone Types and Key Advantages
[ "E. C. Nunes" ]
cs.RO
[ "cs.RO" ]
Employing Drones in Agriculture: An Exploration of Various Drone Types and Key Advantages 1st Eduardo Carvalho Nunes Department of Engineering University of Trás-os-Montes and Alto Douro 5000-801, Vila Real, Portugal ORCID: 0000-0002-5345-8854 ===================================================================================================================================================================== This article explores the use of drones in agriculture and discusses the various types of drones employed for different agricultural applications. Drones, also known as unmanned aerial vehicles (UAVs), offer numerous advantages in farming practices. They provide real-time and high-resolution data collection, enabling farmers to make informed irrigation, fertilization, and pest management decisions. Drones assist in precision spraying and application of agricultural inputs, minimizing chemical wastage and optimizing resource utilization. They offer accessibility to inaccessible areas, reduce manual labor, and provide cost savings and increased operational efficiency. Drones also play a crucial role in mapping and surveying agricultural fields, aiding crop planning and resource allocation. However, challenges such as regulations and limited flight time need to be addressed. The advantages of using drones in agriculture include precision agriculture, cost and time savings, improved data collection and analysis, enhanced crop management, accessibility and flexibility, environmental sustainability, and increased safety for farmers. Overall, drones have the potential to revolutionize farming practices, leading to increased efficiency, productivity, and sustainability in agriculture. Drone, Agriculture, UAV § INTRODUCTION The use of drones in agriculture has gained significant attention in recent years due to their potential to revolutionize farming practices. Drones, also known as unmanned aerial vehicles (UAVs), offer a range of applications that can enhance efficiency, productivity, and sustainability in agriculture. One of the key advantages of using drones in agriculture is their ability to provide real-time and high-resolution data collection <cit.>. Drones equipped with cameras, sensors, and imaging technologies can capture detailed imagery of crops, soil conditions, and field topography <cit.>. This data can be used for crop monitoring, assessment, and precision agriculture practices <cit.>. By analyzing this data, farmers can make informed decisions regarding irrigation, fertilization, and pest management, leading to optimized resource utilization and improved crop yields <cit.>. Drones also play a crucial role in precision spraying and application of agricultural inputs <cit.>. With their ability to navigate through fields and deliver targeted treatments, drones can reduce chemical wastage, minimize environmental impact, and improve the efficiency of pesticide and fertilizer application <cit.>. This targeted approach helps protect beneficial insects, reduce water pollution, and optimize resource utilization <cit.>. Furthermore, drones offer accessibility to inaccessible or inaccessible areas by traditional means <cit.>. They can fly at low altitudes and capture data from different angles and perspectives, providing a comprehensive view of the field <cit.>. This enables farmers to monitor large farmland areas quickly and efficiently, reducing the time and labor required for manual inspections <cit.>. 
Drones can cover large farmland areas in a fraction of the time it would take using traditional methods, leading to cost savings and increased operational efficiency <cit.>. In addition to data collection and monitoring, drones can assist in mapping and surveying agricultural fields. They can create high-resolution maps and 3D models, providing valuable information for crop planning, land management, and resource allocation. Drones equipped with advanced sensors, such as LiDAR or hyperspectral cameras, can capture detailed data for precise analysis and decision-making <cit.>. This enables farmers to identify areas of nutrient deficiencies, optimize irrigation practices, and implement site-specific management strategies. The use of drones in agriculture is challenging. Regulations and licensing requirements for drone operation vary across countries and regions, and compliance with these regulations is essential to ensure safe and responsible drone use <cit.>. Additionally, drones' limited flight time and battery capacity can pose challenges in large-scale farming operations <cit.>. However, advancements in drone technology, such as improved battery life and payload capacity, are addressing these limitations and expanding the possibilities for drone applications in agriculture. § DIFFERENT TYPES OF DRONES USED IN AGRICULTURE In agriculture, different types of drones are used for various applications. These drones offer unique capabilities and functionalities that cater to specific agricultural needs. Some of the commonly used types of drones in agriculture include: * Multi-Rotor Drones: Multi-rotor drones (Figure <ref>), such as quadcopters and hexacopters, are popular in agriculture due to their maneuverability and stability <cit.>. They are equipped with multiple rotors that allow them to hover in place, fly at low altitudes, and capture high-resolution imagery. Multi-rotor drones are suitable for tasks that require close and contained object capture, such as monitoring crop health, detecting pests and diseases, and applying targeted treatments <cit.>. * Fixed-Wing Drones: Fixed-wing drones (Figure <ref>) have a wing-like structure and are designed to fly like airplanes <cit.>. They are known for their long-flight endurance and ability to cover large areas. Fixed-wing drones are commonly used for mapping and surveying agricultural fields, as they can fly faster and cover more considerable distances. However, they require a runway for takeoff and landing, which can be a limitation in specific agricultural settings. * Hybrid Drones: Hybrid drones (Figure <ref>) combine the features of multi-rotor and fixed-wing drones <cit.>. They can take off and land vertically like multi-rotor drones and then transition to fixed-wing flight for longer endurance and coverage <cit.>. Hybrid drones are suitable for applications that require both close-range imaging and large-scale mapping, providing flexibility and versatility in agricultural operations. * Thermal Imaging Drones: Thermal imaging drones (Figure <ref>) are equipped with thermal cameras that capture infrared radiation emitted by objects <cit.>. These drones are used in agriculture to monitor crop health, detect irrigation issues, and identify areas of heat stress or pest infestation <cit.>. Thermal imaging drones can provide valuable insights into the temperature distribution and thermal patterns in agricultural fields, aiding precision agriculture practices. 
* Spraying Drones: Spraying drones (Figure <ref>), also known as agricultural drones or crop dusting drones, are specifically designed for the targeted application of pesticides, fertilizers, and other agricultural inputs <cit.>. These drones are equipped with spraying systems that can accurately and efficiently deliver chemicals to crops, reducing the need for manual labor and minimizing chemical wastage <cit.>. Spraying drones offer precise and controlled applications, reducing environmental impact and optimizing resource utilization. * Surveillance Drones: Surveillance drones (Figure <ref>) are used in agriculture for monitoring and security purposes <cit.>. These drones are equipped with cameras and sensors that capture real-time video footage and imagery, allowing farmers to monitor their fields, livestock, and infrastructure remotely <cit.>. Surveillance drones can help detect unauthorized activities, track animal movements, and identify potential threats or risks in agricultural operations. * Mapping and Surveying Drones: Mapping and surveying drones (Figure <ref>) are used to create high-resolution maps and 3D models of agricultural fields <cit.>. These drones are equipped with advanced sensors, such as LiDAR (Light Detection and Ranging) or photogrammetry cameras, to capture detailed and accurate data <cit.>. Mapping and surveying drones are valuable tools for precision agriculture, enabling farmers to analyze topography, monitor soil conditions, and plan efficient land management strategies. * Payload-Specific Drones: Besides the above types, there are drones designed for specific agricultural applications. For example, there are drones equipped with hyperspectral sensors for detailed analysis of crop health and nutrient content <cit.>. There are also drones with specialized sensors for monitoring soil moisture levels, detecting weed infestations, or assessing plant growth parameters <cit.>. These payload-specific drones (Figure <ref>) cater to specific data collection needs in agriculture. § ADVANTAGES OF USING DRONES IN AGRICULTURE Using drones in agriculture offers several advantages contributing to improved efficiency, productivity, and sustainability in agricultural practices. The advantages of using drones in farming are: * Precision Agriculture: Drones enable precision agriculture practices by providing high-resolution imagery and data collection capabilities <cit.>. They can capture detailed information about crop health, soil conditions, and pest infestations, allowing farmers to make informed decisions and apply targeted treatments <cit.>. This precision approach helps optimize resource utilization, reduce input wastage, and increase crop yields <cit.>. * Cost and Time Savings: Drones can cover large areas of farmland quickly and efficiently, reducing the time and labor required for manual inspections and data collection <cit.>. They can perform tasks such as crop monitoring, mapping, and spraying in a fraction of the time it would take using traditional methods <cit.>. This leads to cost savings by minimizing the need for manual labor and reducing the use of resources such as water, fertilizers, and pesticides <cit.>. * Improved Data Collection and Analysis: Drones equipped with various sensors, such as cameras, thermal imaging, and multispectral sensors, can collect a wide range of data about crops, soil, and environmental conditions <cit.>.
This data can be used for detailed analysis and monitoring, enabling farmers to detect early signs of crop stress, nutrient deficiencies, or disease outbreaks <cit.>. The data collected by drones can be processed using advanced analytics and machine learning algorithms to generate actionable insights for better decision-making <cit.>. * Enhanced Crop Management: Drones provide real-time and up-to-date information about crop health, allowing farmers to implement timely interventions and optimize crop management practices <cit.>. For example, drones can help identify areas of the field that require additional irrigation or fertilization, enabling precise application and reducing waste <cit.>. They can also assist in monitoring crop growth, estimating yield potential, and predicting harvest times <cit.>. * Accessibility and Flexibility: Drones offer accessibility to areas that are difficult to reach or inaccessible by traditional means, such as steep slopes or dense vegetation <cit.>. They can fly at low altitudes and capture data from different angles and perspectives, providing a comprehensive view of the field <cit.>. Drones can be deployed quickly and easily, allowing farmers to respond rapidly to changing conditions or emergencies <cit.>. * Environmental Sustainability: Using drones in farming can contribute to environmental sustainability by reducing the use of chemicals and minimizing the environmental impact of agricultural practices <cit.>. Drones enable targeted spraying of pesticides and fertilizers, reducing the amount of chemicals applied and minimizing their dispersion into the environment <cit.>. This targeted approach helps protect beneficial insects, reduce water pollution, and promote ecological balance <cit.>. * Safety: Drones eliminate or reduce the need for farmers to physically access hazardous or difficult-to-reach areas, such as tall crops, steep terrains, or areas with potential safety risks <cit.>. This improves the safety of farmers and reduces the risk of accidents or injuries associated with manual labor <cit.>. § CONCLUSION Using drones in agriculture holds immense promise for revolutionizing farming practices and improving efficiency, productivity, and sustainability. The various types of drones available cater to specific agricultural needs, ranging from crop monitoring and assessment to precision spraying, mapping, and surveying. Drones provide real-time and high-resolution data collection, enabling farmers to make informed decisions regarding resource allocation and optimize crop management practices. They offer cost and time savings by reducing manual labor and minimizing the use of resources. The ability of drones to access inaccessible areas and provide comprehensive views of the fields enhances their usability and efficiency in large-scale farming operations. Furthermore, drones contribute to environmental sustainability by enabling targeted spraying, reducing chemical wastage, and minimizing the environmental impact of agricultural practices. The safety aspect of using drones must be considered, as they eliminate or reduce the need for farmers to access hazardous areas physically. Despite challenges such as regulations and limited flight time, advancements in drone technology are continually addressing these limitations. 
Overall, the advantages of using drones in agriculture are significant, and their integration into farming practices has the potential to transform the industry, leading to optimized resource utilization, improved crop yields, and sustainable agricultural practices. 00 10.1002/net.21818Otto, A., Agatz, N., Campbell, J., Golden, B. & Pesch, E. Optimization Approaches for Civil Applications of Unmanned Aerial Vehicles (UAVs) or Aerial Drones: A Survey. Networks. (2018) 10.1007/s41666-020-00080-6Nasajpour, M., Pouriyeh, S., Parizi, R., Dorodchi, M., Valero, M. & Arabnia, H. Internet of Things for Current COVID-19 and Future Pandemics: An Exploratory Study. Journal Of Healthcare Informatics Research. (2020) 10.3390/rs9010088Jakob, S., Zimmermann, R. & Gloaguen, R. The Need for Accurate Geometric and Radiometric Corrections of Drone-Borne Hyperspectral Data for Mineral Exploration: MEPHySTo—A Toolbox for Pre-Processing Drone-Borne Hyperspectral Data. Remote Sensing. (2017) 10.3390/s20051487Gao, D., Sun, Q., Hu, B. & Zhang, S. A Framework for Agricultural Pest and Disease Monitoring Based on Internet-of-Things and Unmanned Aerial Vehicles. Sensors. (2020) 10.1109/access.2020.2982086Castellanos, G., Deruyck, M., Martens, L. & Joseph, W. System Assessment of WUSN Using NB-IoT UAV-Aided Networks in Potato Crops. Ieee Access. (2020) 10.1038/s41598-020-67898-3Santangeli, A., Chen, Y., Kluen, E., Chirumamilla, R., Tiainen, J. & Loehr, J. Integrating Drone-Borne Thermal Imaging With Artificial Intelligence to Locate Bird Nests on Agricultural Land. Scientific Reports. (2020) 10.3390/land10020164Ayamga, M., Tekinerdogan, B. & Kassahun, A. Exploring the Challenges Posed by Regulations for the Use of Drones in Agriculture in the African Context. Land. (2021) 10.3390/drones6070160Javan, F., Samadzadegan, F., Gholamshahi, M. & Mahini, F. A Modified YOLOv4 Deep Learning Network for Vision-Based UAV Recognition. Drones. (2022) 10.1109/access.2021.3130900Dutta, A., Roy, S., Kreidl, O. & Bölöni, L. Multi-Robot Information Gathering for Precision Agriculture: Current State, Scope, and Challenges. Ieee Access. (2021) 10.5937/ekonomika1804091sSpalević, Ž., Ilic, M. & Savija, V. The Use of Drones in Agriculture: ICT Policy, Legal and Economical Aspects. Ekonomika. (2018) 10.3390/app11052138Kim, S., Ahmad, H., Moon, J. & Jung, S. Nozzle With a Feedback Channel for Agricultural Drones. Applied Sciences. (2021) 10.5194/isprs-archives-xlii-2-789-2018Oliveira, R., Khoramshahi, E., Suomalainen, J., Hakala, T., Viljanen, N. & Honkavaara, E. Real-Time and Post-Processed Georeferencing for Hyperpspectral Drone Remote Sensing. The International Archives Of The Photogrammetry Remote Sensing And Spatial Information Sciences. (2018) 10.1111/sum.12771Chen, Q., Li, L., Chong, C. & Wang, X. AI‐enhanced Soil Management and Smart Farming. Soil Use And Management. (2021) 10.1088/1757-899x/1259/1/012015Borikar, G., Gharat, C. & Deshmukh, S. Application of Drone Systems for Spraying Pesticides in Advanced Agriculture: A Review. Iop Conference Series Materials Science And Engineering. (2022) 10.1016/j.jairtraman.2020.101929Merkert, R. & Bushell, J. Managing the Drone Revolution: A Systematic Literature Review Into the Current Use of Airborne Drones and Future Strategic Directions for Their Effective Control. Journal Of Air Transport Management. (2020) 10.1371/journal.pone.0141006Lisein, J., Michez, A., Claessens, H. & Lejeune, P. Discrimination of Deciduous Tree Species From Time Series of Unmanned Aerial System Imagery. Plos One. 
(2015) 10.3390/drones5020041Krul, S., Pantos, C., Frangulea, M. & Valente, J. Visual SLAM for Indoor Livestock and Farming Using a Small Drone With a Monocular Camera: A Feasibility Study. Drones. (2021) 10.3390/agronomy11091809Huzaifah, M., Juraimi, A., Che'ya, N., Sulaiman, N., Manaf, M., Ramli, Z. & Motmainna, M. Using Remote Sensing and an Unmanned Aerial System for Weed Management in Agricultural Crops: A Review. Agronomy. (2021) 10.30657/pea.2021.27.10Dadi, V., Nikhil, S., Mor, R., Agarwal, T. & Arora, S. Agri-Food 4.0 and Innovations: Revamping the Supply Chain Operations. Production Engineering Archives. (2021) 10.22438/jeb/43/1/mrn-1912Verma, A., Singh, M., Parmar, R. & Bhullar, K. Feasibility Study on Hexacopter UAV Based Sprayer for Application of Environment-Friendly Biopesticide in Guava Orchard. Journal Of Environmental Biology. (2022) 10.1007/978-981-16-4369-9_25Kumaar, A. & Kumaar, A. GPS-Based Path Planning Algorithm for Agriculture Drones. (2021) 10.3390/agriculture13051075McCarthy, C., Nyoni, Y., Kachamba, D., Banda, L., Moyo, B., Chisambi, C., Banfill, J. & Hoshino, B. Can Drones Help Smallholder Farmers Improve Agriculture Efficiencies and Reduce Food Insecurity in Sub-Saharan Africa? Local Perceptions From Malawi. Agriculture. (2023) 10.1051/matecconf/202133502002Lee, C., Phang, S. & Mun, H. Design and Implementation of an Agricultural UAV With Optimized Spraying Mechanism. Matec Web Of Conferences. (2021) 10.1051/e3sconf/202338101048Zhichkin, K., Nosov, V., Zhichkina, L., Anichkina, O., Borodina, I. & Beketov, A. Efficiency of Using Drones in Agricultural Production. E3s Web Of Conferences. (2023) 10.1109/access.2019.2949703Farooq, M., Riaz, S., Abid, A., Abid, K. & Naeem, M. A Survey on the Role of IoT in Agriculture for the Implementation of Smart Farming. Ieee Access. (2019)
http://arxiv.org/abs/2307.04962v2
20230711015208
Intrinsically motivated graph exploration using network theories of human curiosity
[ "Shubhankar P. Patankar", "Mathieu Ouellet", "Juan Cervino", "Alejandro Ribeiro", "Kieran A. Murphy", "Dani S. Bassett" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.SI" ]
Intrinsically motivated graph exploration using network theories of human curiosity Shubhankar P. Patankar Mathieu Ouellet Juan Cervino Alejandro Ribeiro Kieran A. Murphy Dani S. Bassett ======================================================================================================================== Intrinsically motivated exploration has proven useful for reinforcement learning, even without additional extrinsic rewards. When the environment is naturally represented as a graph, how to guide exploration best remains an open question. In this work, we propose a novel approach for exploring graph-structured data motivated by two theories of human curiosity: the information gap theory and the compression progress theory. The theories view curiosity as an intrinsic motivation to optimize for topological features of subgraphs induced by the visited nodes in the environment. We use these proposed features as rewards for graph neural-network-based reinforcement learning. On multiple classes of synthetically generated graphs, we find that trained agents generalize to larger environments and to longer exploratory walks than are seen during training. Our method computes more efficiently than the greedy evaluation of the relevant topological properties. The proposed intrinsic motivations bear particular relevance for recommender systems. We demonstrate that curiosity-based recommendations are more predictive of human behavior than PageRank centrality for several real-world graph datasets, including MovieLens, Amazon Books, and Wikispeedia. *Authors contributed equally. § INTRODUCTION Providing a task-agnostic incentive for exploration as an intrinsic reward has proven useful in a variety of reinforcement learning settings, even in the absence of any task-specific (extrinsic) rewards <cit.>. Termed curiosity in reference to the analogous drive in humans, prior formulations are based on different means of quantifying the novelty or surprisal of states encountered by an agent <cit.>. If states are represented as graphs, the task-agnostic motivation to explore can additionally be content-agnostic, depending only on the topological properties of the visited state subgraph. Leading theories of curiosity in humans are similarly content-agnostic, based on structural properties of a relational graph that connects atoms of knowledge without regard to their actual content <cit.>. Theories of curiosity attempt to describe the intrinsic motivations that underlie human decision-making when acquiring information through exploration. The information gap theory (IGT) argues that curiosity collects knowledge that regulates gaps in our understanding of the world <cit.>. Exposure to a small amount of novel information pushes an individual's uncertainty about the environment past an acceptable threshold, creating an information gap. Curious agents are driven to resolve the discrepancy by acquiring information to close the gap <cit.>. An alternative account, the compression progress theory (CPT), posits that information-seeking behavior is motivated to build increasingly compressible state representations <cit.>. Compression enables abstraction and improved generalization by emphasizing the essential latent structures of knowledge <cit.>. 
Information gap theory and compression progress theory provide optimization objectives for the human exploration of graph-structured environments. In this work, we demonstrate that network theoretic measurements of information gaps and compression progress can be meaningful exploration incentives for reinforcement learning (RL). We train agents that use graph neural networks (GNN) to explore graph-structured data while optimizing for gap creation and improved compression (Figure <ref>). Once trained, the agents navigate network structures to optimize certain topological features without regard to the content of the network. The agents can be used to modify statistics that are based on random walk processes on graphs. As an example, we use data of humans traversing spaces with natural graph structure—books and movies to review or Wikipedia pages to visit—to compute node centrality measures that best align with human navigation data. Our primary contributions are the following: * We adapt intrinsic motivations for human curiosity as reward functions for reinforcement learning. * We replace expensive reward computations with graph neural networks. Our method is computationally efficient and generalizes to shorter and longer exploratory walks and to smaller and larger environments than are seen during training. * We demonstrate that modifying measures of node centrality with curiosity-trained agents increases alignment with human behavior in real-world graph datasets without using any domain-specific feature information. § RELATED WORK Human curiosity as graph exploration. Curiosity in humans is commonly conceptualized as the intrinsic motivation to gather information from the environment <cit.>. Humans acquire information even when it is expensive to obtain <cit.> and may have no immediate tangible utility <cit.>, suggesting that exploration is inherently valuable. Recent work has expanded the acquisitional framing of curiosity with a more general connectional account. This perspective defines curiosity as an exploratory walk on a graph. Here, curiosity entails building a growing knowledge network by acquiring informational units as nodes and their relationships as edges <cit.>. The state of an individual's knowledge is viewed as the subgraph of the environment induced by the visited nodes <cit.>. Under this formulation, humans explore Wikipedia via trajectories with fewer information gaps and greater network compressibility than relevant null models <cit.>. Intrinsic motivations in reinforcement learning. The need for improved exploration has led reinforcement learning to incorporate curiosity-like intrinsic motivations into its algorithmic framework <cit.>. Exploration rewards in RL take several forms. At the core of all approaches is an inducement for the learning agent to seek novelty. Count-based approaches encourage visits to unfamiliar or infrequently visited states <cit.>. When the state space is large, enumerating the frequencies of visits to all possible states is prohibitively expensive. To overcome this challenge, neural density models derive uncertainty-based pseudo-counts <cit.>. A complementary perspective emphasizes model building and formulates curiosity rewards in terms of learning progress and surprisal <cit.>. For instance, in the prediction error approach—alongside an extrinsic task—the agent attempts to learn a model of the environment's dynamics. Curiosity rewards are proportional to the model error when predicting transitions between states. 
Memory-based methods assign rewards based on how different a newly visited state is from those stored in memory <cit.>. Instead of a prescriptive approach, parametric methods attempt to explicitly learn an intrinsic reward function <cit.>. In general, improved exploration is a means to an end, with intrinsic rewards supplementing extrinsic task-specific rewards. Graph combinatorial optimization and reinforcement learning. Combinatorial optimization entails selecting elements from a finite set of options such that the chosen subset satisfies an objective function <cit.>. Graph analyses often involve combinatorial optimization, with graph structure imposing constraints on the solution space. Recent work combines graph neural networks and reinforcement learning to construct solutions by incrementally adding nodes to a partial set <cit.>. First, a GNN constructs an embedding for the candidate solution; second, an agent, for instance, a deep Q-network (DQN), trained via RL, selects an action to expand the solution <cit.>. The two networks can be trained end-to-end with an optimization objective driving gradients for learning. This approach solves various graph combinatorial tasks, such as the traveling salesperson problem <cit.>, finding the maximum independent set <cit.>, or the minimum vertex cover <cit.>, and identifying isomorphic subgraphs <cit.>. Instead of uncovering nodes, GNNs can also sequentially collapse nodes into each other with implications for matrix multiplication <cit.>. GNNs, in combination with RL, have also been used to build and rewire graphs such that they possess high values of specific features of interest <cit.>. § METHODS Our goal is to train an agent to explore the environment while optimizing for a structural property of the visited subgraph. Consider a graph-structured environment 𝒢 = (𝒱, ℰ) with node set 𝒱 and edge set ℰ⊆𝒱×𝒱. Let 𝒱_T = {v_1, v_2, ⋯, v_T}⊆𝒱 be an ordered set of explored nodes at time T. The corresponding subgraph trajectory is the sequence 𝒮_1⊂𝒮_2⊂⋯⊂𝒮_T, wherein the t-th subgraph 𝒮_t is induced by the first t visited nodes. Specifically, given the graph 𝒢, the number of nodes to visit T, a graph feature function ℱ: 2^𝒢→ℝ, and a discount factor γ∈ [0,1], we seek an ordered set 𝒱^*_T such that ∑_t=1^Tγ^t-1ℱ(𝒮_t) is maximal. The feature function acts as an intrinsic reward to encourage exploration. The discounting parameter determines the extent to which the future values of ℱ factor into the decision-making at every step. Drawing inspiration from human curiosity, we adopt information gap theory and compression progress theory to design two functions, ℱ_IGT and ℱ_CPT. §.§ Network theories of curiosity Information gap theory views curiosity as an intrinsic motivation to regulate gaps in knowledge. For humans, new information pushes the level of uncertainty about the environment past an acceptable threshold, creating an uncertainty gap. Curiosity seeks to find information units to close this gap. By modeling the state of knowledge as a graph, we can characterize information gaps as topological cavities. In a graph, cavities can take several forms: dimension 0 cavities represent disconnected network components, whereas those of dimension 1 represent non-triangular loops of edges (Figure <ref>A). In order to identify and count topological cavities, a graph is first converted into a higher-order relational object known as a simplicial complex <cit.>. A simplicial complex is comprised of simplices. 
Geometrically, a d-simplex is a shape with flat sides formed by connecting d+1 points. For 0 ≤ d ≤ 2, by definition a node is a 0-simplex, an edge is a 1-simplex, and a filled triangle is a 2-simplex. We can construct a simplicial complex by assigning a d-simplex to each (d+1)-clique in a binary graph. In a simplicial complex, a d-dimensional topological cavity is identified as an enclosure formed by d-simplices that a higher-dimensional simplex cannot fill. We refer the reader to Refs. <cit.> for a more comprehensive treatment of algebraic topology. Given a simplicial complex, the d-th Betti number β_d counts the number of topological gaps of dimension d. Prior work examining human knowledge-network-building finds compelling evidence in support of information gap theory when gaps are conceptualized as 1-dimensional cavities <cit.>. In this work, at each time step t with a visited subgraph 𝒮_t, we assign rewards equal to β_1, ℱ_IGT = β_1(𝒮_t). Compression progress theory posits that curiosity is a drive to compress the state of knowledge <cit.>. During graph exploration, at each step t in the trajectory, the compression reward can be assigned as network compressibility <cit.>. Consider a subgraph 𝒮_t with t nodes and q edges, represented by a symmetric adjacency matrix M ∈^t × t. Information about the subgraph's structure can be encoded in the form of a random walk x = (x_1, x_2, … ). The walk sequence is generated by randomly transitioning from a node to one of its neighbors. Thus, for a random walk on 𝒮_t, the probability of transitioning from node i to node j is P_ij = M_ij/∑_jM_ij. Since the walk is Markovian, its information content (or its entropy) is given by H = -∑_iπ_i∑_j P_i jlog P_i j. Here, π_i is the stationary distribution representing the long-term probability that the walk arrives at node i, given by π_i = ∑_jM_ij/2q. Assigning nodes to clusters leads to a coarse-grained sequence y = (y_1, y_2, … ). The number of clusters n can be used to define a scale of the network's description s = 1 - n-1/t. For example, when n = t, the network is described at a fine-grained scale s = 1/t; at the other extreme, when n = 1 the network is described at the coarsest scale s = 1. At every description scale in between, it is possible to identify a clustering of nodes that minimizes the information rate (Figure <ref>B). After computing these optimal clusterings across all scales, we arrive at a rate-distortion curve R(s), representing a bound on the information rate as a function of the scale s. The compressibility C of the network is then given as the average reduction in the information rate across all scales <cit.>, C = H - 1/t∑_s R(s). Therefore, the compression reward is ℱ_CPT = C(𝒮_t), where C(𝒮_t) denotes the compressibility of subgraph 𝒮_t. §.§ Reinforcement learning for graph exploration We formulate the graph exploration problem as a Markov decision process (MDP) <cit.>: * States: The state is defined as the subgraph induced by the visited nodes at time t, 𝒮_t = 𝒢[𝒱_t]. We specify the initial state 𝒮_1 by randomly selecting a starting node v_1 ∈𝒱. Each state represents a partial solution to the broader sequential exploration task. * Actions: The agent can transition to any neighbor of the most recently visited node. We denote the neighborhood of a node v as 𝒩(v) = {u ∈𝒱|(v, u) ∈ℰ}. Therefore, given the state at time t, the set of available next nodes is 𝒜(𝒮_t) = 𝒩(v_t)\𝒱_t. 
If no nodes are available in the immediate neighborhood, we expand the action set to include all neighbors of the explored subgraph. * Transitions: Given the pair 𝒮_t and v ∈𝒜(𝒮_t), the transition to state 𝒮_t+1 is deterministic with P(S_t+1| S_t, v) = 1. * Rewards: The reward at time t is defined as R_t = ℱ(𝒮_t). We train RL agents using either ℱ_IGT or ℱ_CPT as the reward function. The policy π(v |𝒮_t) maps states to actions, fully describing the agent's behavior in the environment. At each step, the agent makes decisions using a value function Q(𝒮_t, v), which evaluates candidate nodes v ∈𝒜(𝒮_t) in the context of the currently explored subgraph. The function measures the total (discounted) reward that is expected to accumulate if the agent selects action v in state 𝒮_t and thereafter follows policy π. In turn, the policy can be viewed as behaving greedily with respect to the value function, π = max _v ∈𝒜(𝒮_t) Q(𝒮_t, v). Solving an MDP entails finding an optimal policy that maximizes the expected discounted sum of rewards. We parameterize the value function Q using a GNN Φ(·):𝒢→ℝ. GNNs build vector embeddings for nodes by iteratively aggregating their features with those from their local neighborhoods <cit.>. Each aggregation step is typically followed by a fully connected layer and a non-linear activation function. Depending on the number of rounds of aggregation, features from more distant locations in the graph can inform the embedding for each node. Specifically, we use the GraphSAGE architecture <cit.>, where at the l-th round of feature aggregation, the embedding for node u is given as, h_u^(l)=f^(l)(h_u^(l-1), h_𝒩(u)^(l-1))=g[θ_C^(l) h_u^(l-1)+θ_A^(l)Ã(h_𝒩(u)^(l-1))], where à represents the aggregation operator, g[.] is the activation function, and θ_C and θ_A are parameters for combination and aggregation, respectively <cit.>. We use the local degree profile (LDP) of each node as the initial set of features <cit.>. LDP comprises various features of a node's neighborhood, including its own degree, the minimum and maximum degrees of its neighbors, and the average and standard deviation of the degrees of its neighbors. We train GNNs for exploration using the DQN algorithm, with a replay buffer for experience sampling, a target network, and a decaying ϵ-greedy exploration rate <cit.>. Details of the full neural network architecture and the training process are included in the Supplementary Materials. §.§ Curiosity-biased node centrality Several graph theoretic quantities can be defined in terms of random walk processes on a graph. We can use agents trained to explore graphs to bias random walk processes and, by extension, the corresponding quantities. PageRank is a widely recognized algorithm that assigns node centrality scores to graph data <cit.>. The per-node score η can be interpreted as the stationary distribution of a random walk process on a network. With probability α, a random walker moves along an edge from node v_i to one of its neighbors. The probability of reaching a connected node v_j is P_ij. Alternatively, with probability 1-α, the walker jumps, or teleports, to a random node in the network. The probability of jumping to node v_k is q_k. Under conditions of irreducibility and aperiodicity <cit.>, the stationary distribution is given as ∑_i (I-α P_ij^t)η_i = (1-α)q_j. The PageRank algorithm follows a random walk that is entirely Markovian. 
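For concreteness, the stationary-distribution equation above is the usual personalized PageRank fixed point η = αP^⊤η + (1-α)q, which can be solved by power iteration. The short sketch below is our own illustration rather than code from the paper; the dense-matrix representation, the uniform-teleportation default, and the function name are assumptions, and dangling (zero out-degree) nodes are not handled.

```python
import numpy as np

def personalized_pagerank(A, alpha=0.85, q=None, tol=1e-10, max_iter=1000):
    """Power iteration for eta = alpha * P^T eta + (1 - alpha) * q on a dense adjacency A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    P = A / A.sum(axis=1, keepdims=True)          # row-stochastic transition matrix P_ij
    if q is None:
        q = np.full(n, 1.0 / n)                   # uniform teleportation (assumption)
    else:
        q = np.asarray(q, dtype=float)
        q = q / q.sum()
    eta = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        eta_new = alpha * P.T @ eta + (1.0 - alpha) * q
        if np.abs(eta_new - eta).sum() < tol:
            return eta_new
        eta = eta_new
    return eta
```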
Typically, the probability P_ij depends solely on the out-degree of v_i and, in the case of node-weighting, on the personalization vector q. Personalized PageRank biases the random walk process using q_k by taking into account nodes that are already visited in the network <cit.>. We can integrate agents trained to optimize for the exploration objectives described earlier into the PageRank algorithm. Specifically, given an already visited subgraph, we propose to modify transition probabilities using Q-values assigned to candidate nodes. Consider a non-Markovian random walker sitting at node v_l with a path history V_l= { v_1, ⋯,v_l-1,v_l}. The visited nodes in the path induce a corresponding subgraph 𝒮_l. Paths are built starting from the most recent initialization or teleportation event. We use a Q-value function trained to optimize for an objective ℱ to bias the walker. The transition probability from node v_l to node v_m can be re-defined as, P^ℱ_lm(𝒮_l) ≡(1-p_g)p_g^rank(Q(𝒮_l, v_m))-1/1-p^| 𝒜(𝒮_l)|, v_m ∈𝒜(𝒮_l), 0, otherwise, where rank(Q(𝒮_l, v_m)) is the rank of the Q-value for action v_m and p_g∈ [ 0,1] is a parameter that controls how likely the walker is to select actions greedily. To compute biased per-node PageRank values, we simulate a walker using P^ℱ_ij(𝒮_i) until probabilities converge. § EXPERIMENTS §.§ Exploration in synthetically generated networks We train a curiosity-based GNN agent to explore synthetically generated graph environments. Each environment is constructed to have N = 50 nodes. Each episode lasts for 10 steps and, therefore, consists of visits to 10 distinct nodes. We examine four synthetic graph models that exhibit a broad range of degree profiles and topologies <cit.>: * Erdös-Rényi (ER): The ER model produces random graphs by adding edges between nodes with probability p. We set p = 0.2. * Barabási-Albert (BA): Starting with a randomly connected skeleton of m nodes, the BA model, also known as the preferential attachment model, adds nodes sequentially. Each new node is connected to m existing nodes with a probability proportional to node degree. This “rich-gets-richer” growth scheme results in graphs with heavy-tailed degree distributions. We set m = 4. * Random geometric: Graph-structured environments, such as transportation networks or power grids, are embedded in physical space. Random geometric graphs model such environments by placing nodes within a unit cube of specified dimensionality. The model places nodes uniformly at random inside the cube. An edge connects a pair of nodes if the distance between the nodes is less than or equal to a radius value. For a 2-dimensional space, we set the radius value to 0.25. * Watts-Strogatz (WS): Many real-world networks possess a “small-world” topology, whereby distant nodes can be reached by a small number of hops from any node in the graph. The WS model creates graphs with a small-world topology by creating a ring graph and adding edges from each node to its k nearest neighbors. Each edge is then rewired at random with probability p. We set k = 4 and p = 0.1. For each of the four graph models, we build 100 training, 10 validation, and 10 testing environments. After training, we evaluate the GNN agent in the testing environments against four baseline approaches: * Random: Select a candidate node at random. * Greedy: For each candidate node, build a candidate state subgraph. Evaluate the reward function for each subgraph and select the node that results in the biggest one-step improvement. 
* Max Degree: Select the candidate node with the largest degree. * Min Degree: Select the candidate node with the smallest degree. The total average reward gathered by the different agents is presented in Table <ref>. For the IGT reward, in all graph models except for ER, the GNN outperforms the greedy agent. By contrast, the one-step-ahead greedy agent consistently performs best for CPT, with the GNN a close second. Baseline approaches broadly perform well compared to the GNN for CPT than they do for IGT. When exploring a graph with the IGT objective, adding a single node can close several topological gaps simultaneously, requiring careful consideration of options. By contrast, compressibility is less sensitive to the choice of node at each step due to its strong correlation with the clustering coefficient <cit.>. If exploring inside a cluster, neighbors of a node are likely to be neighbors of each other, lowering the likelihood that a single choice will significantly alter compressibility. For instance, the max degree baseline performs well for the CPT objective in random geometric graphs because high-degree nodes are centrally placed and surrounded by dense, highly clustered neighborhoods <cit.>. Barabási-Albert graphs, similarly, have highly clustered cores due to preferential attachment in their generative process <cit.>. Watts-Strogatz networks have high clustering when the edge rewiring probability is low. As a result, even random exploration in such topologies tends to occur inside clusters leading to greater compressibility. In support of this view, the minimum degree baseline, which is likely to select a node outside of a cluster, is typically further apart from the performance of the GNN compared to the other baselines. §.§.§ Trajectory length and environment size generalization After training the GNN agent to explore 10 nodes in random geometric graph environments of 50 nodes, we evaluate generalization performance for shorter and longer trajectories and smaller and larger environments. We test trajectory length generalization while holding environment size fixed at 50 nodes. For walks shorter and longer than 10 steps, the GNN performs comparably to the greedy agent for both IGT and CPT (Figure <ref>). We test environment size generalization by taking 10 steps on graphs that are smaller or larger than 50 nodes. The GNN agent outperforms the greedy agent in smaller environments. In larger environments, the GNN is superior to the greedy agent for IGT and exhibits comparable performance for CPT. In summary, the performance of trained GNNs does not degrade for settings outside the training regime. These results indicate that we can train GNNs for graph exploration in regimes where reward computations are relatively inexpensive due to the smaller size of subgraphs and expect them to scale to longer walks and larger networks. We also report generalization results for the other graph models in the Supplementary Materials. §.§.§ Time complexity Using graphs of different sizes, we evaluate the computational efficiency of our approach by comparing the wall time for a forward pass through the GNN with that for a greedy evaluation of the rewards. Figure <ref> displays results for random geometric synthetic graphs. The time for greedy evaluation of the topological features for both IGT and CPT grows quickly with subgraph size, whereas the GNN offers a faster alternative. 
Comparing the rewards for the two theories of curiosity, the information gap reward is significantly cheaper to evaluate compared to network compressibility. Therefore, in addition to approximating human intrinsic motivations for exploration, we find that the GNN offers a route to efficient computation of meaningful topological features of graphs. §.§ Alignment with human navigation of graph data Next, we evaluate the utility of curiosity-trained agents in predicting human behavior in graph-structured environments. To gather path-based information for our analyses, we use two types of real-world graph datasets. Reviews enable us to approximate consumer paths on a similarity graph of available content. We create two separate graphs, one comprising movies from the MovieLens dataset <cit.> and the other comprising books from the Amazon Product Reviews dataset <cit.>. We also examine a second type of dataset consisting of user paths on Wikipedia in the Wikispeedia game environment <cit.>. The three datasets are: * MovieLens: The MovieLens dataset consists of movie reviews <cit.>. We use IMDB user summaries and Word2Vec to construct vector embeddings for each movie. We build a graph environment by treating each movie as a node. For each movie, we use cosine similarity to add edges to the 20 most similar movies. * Amazon Books: The Amazon Product Reviews dataset encompasses reviews for diverse products <cit.>. To narrow our focus, we specifically extract and retain reviews associated with books. We filter out books with fewer than 150 reviews and limit our analysis to reviewers with at least 5 reviews. To represent each book as a distinct entity, we use Word2Vec-based vector embeddings. For each book, we add edges by identifying the top 20 most similar books based on their embeddings. * Wikispeedia: The Wikispeedia dataset consists of paths collected for a navigation game on Wikipedia <cit.>. In the game, users are presented with a starting article and a destination article and are tasked with reaching the destination article using hyperlinks within Wikipedia. Here, the underlying hyperlink structure of Wikipedia acts as the graph environment. We train GNNs for graph exploration in each of the three real-world environments for both information gap theory and compression progress theory. To incorporate person-specific data, the PageRank hop vector q is modified to be zero for all nodes except a user's n_burn-in most recently visited nodes <cit.>. We assign a uniform jump probability to the n_burn-in nodes, with q_k = 1/n_burn-in. Each graph feature function ℱ yields a PageRank vector η^ℱ_i. We combine these vectors linearly to obtain a final PageRank vector, denoted as η' such that η'(α, β̃, γ̃, δ̃) ≡β̃η_PR(α) + γ̃η_IGT(α) + δ̃η_CPT(α) where β̃^2 +γ̃^2 +δ̃^2 = 1 and η_PR is the score vector obtained using standard PageRank. To evaluate this approach, we optimize the set of variables α, β̃, γ̃, δ̃ using a training set of transitions. We then compare performance against unbiased PageRank, where only α is optimized. Formally, we generate two sets of human transitions denoted as 𝐒_test and 𝐒_train. These sets consist of portions of human trajectories with a length of n_burn-in+1. Next, we perform Bayesian optimization to compute parameters â and â_bias for the two sets, â ≡max_α∑_S∈𝐒_trainrank_v_burn-in(η_PR(α) ) â_bias ≡max_α, β̃, γ̃, δ̃∑_S∈𝐒_trainrank_v_burn-in(η'(α, β̃, γ̃, δ̃ ) ). 
To evaluate our method, we calculate the ratio of improvement on the test set, given as r_𝐒_test ≡∑_S∈𝐒testrank_v_burn-in(η'(â_bias ) ) / ∑_S∈𝐒testrank_v_burn-in(η_PR(â) ) . Table <ref> displays r_𝐒_test in percentage terms for the three datasets when considering curiosity theories alone or in combination. Across all combinations, improvement ranges from 2.9% to 32.2%, indicating that incorporating curiosity for the biasing of walks is useful. Depending on the dataset, the IGT or CPT-trained agent performs better with similar values. In the Wikispeedia data, however, CPT leads to improvement that is nearly four times higher than IGT. The books and movie datasets exhibit similarities since the selection mechanism in both is not directed towards a goal. By contrast, the Wikispeedia dataset involves goal-directed navigation. Figure <ref>B shows the improvement in predicting the transitions made by humans in the Wikispeedia dataset. We compare percentile ranks for each transition made by the human when making predictions with and without biasing the random walk process. We find that biased curiosity assigns higher percentile ranks to actual transitions than standard PageRank. We also analyze the distance from the initial node with respect to time for individual random walk trajectories (Figure <ref>C). In general, observed differences between the biased walkers are small and fall within the standard deviation of the walk process. These observations suggest that the differences observed in the biased PageRank algorithm are not solely attributable to changes in the diffusion properties of the random walks. § LIMITATIONS In our implementation, when computing an embedding for the state subgraph, the GNN does not distinguish candidate nodes from those already visited. Appending a one-hot vector to differentiate candidates could potentially lead to improved performance. This approach would allow the network to recognize and, therefore, prioritize candidate nodes during the decision-making process. The PageRank algorithm includes various hyperparameters that can be further fine-tuned; for instance, p_g or refining the distribution P^ℱ_ij(𝒮_i) that is used to select nodes for the walker. § DISCUSSION We can use intrinsic motivations that underpin human curiosity to train neural networks to explore graph-structured environments with diverse topological structures. Our approach generalizes to longer exploratory walks and larger environments than are seen during training. Importantly, relying only on the structure of the visited subgraph and without any domain-specific node features, we find that our method is more predictive of human behavior than PageRank centrality for several real-world graph datasets.
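For readers who want to reproduce the biasing mechanism, the rank-based transition rule from the curiosity-biased centrality section can be sampled as in the following sketch. This is our own illustrative code: the function name, the default value of p_g, and the geometric normalization by 1 - p_g^|𝒜(𝒮_l)| are assumptions on our part, not the original implementation.

```python
import numpy as np

def biased_step(candidates, q_values, p_g=0.7, rng=None):
    """Sample the next node; selection probability decays geometrically with the rank of its Q-value."""
    rng = np.random.default_rng() if rng is None else rng
    q_values = np.asarray(q_values, dtype=float)
    k = len(candidates)
    order = np.argsort(-q_values)                 # candidate indices sorted by decreasing Q-value
    ranks = np.empty(k, dtype=int)
    ranks[order] = np.arange(1, k + 1)            # the best candidate receives rank 1
    probs = (1.0 - p_g) * p_g ** (ranks - 1) / (1.0 - p_g ** k)
    return candidates[rng.choice(k, p=probs / probs.sum())]
```

Repeated applications of this step, restarted at each teleportation event, yield the non-Markovian walker whose long-run visit frequencies define the biased PageRank scores.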
http://arxiv.org/abs/2307.04094v1
20230709043319
Class-Incremental Mixture of Gaussians for Deep Continual Learning
[ "Lukasz Korycki", "Bartosz Krawczyk" ]
cs.LG
[ "cs.LG", "I.5.0; I.5.1" ]
Class-Incremental Mixture of Gaussians for Deep Continual Learning Lukasz Korycki Virginia Commonwealth University [email protected] Bartosz Krawczyk Virginia Commonwealth University [email protected] August 12, 2023 ================================================================================================================================================== Continual learning models for stationary data focus on learning and retaining concepts coming to them in a sequential manner. In the most generic class-incremental environment, we have to be ready to deal with classes coming one by one, without any higher-level grouping. This requirement invalidates many previously proposed methods and forces researchers to look for more flexible alternative approaches. In this work, we follow the idea of centroid-driven methods and propose end-to-end incorporation of the mixture of Gaussians model into the continual learning framework. By employing the gradient-based approach and designing losses capable of learning discriminative features while avoiding degenerate solutions, we successfully combine the mixture model with a deep feature extractor allowing for joint optimization and adjustments in the latent space. Additionally, we show that our model can effectively learn in memory-free scenarios with fixed extractors. In the conducted experiments, we empirically demonstrate the effectiveness of the proposed solutions and exhibit the competitiveness of our model when compared with state-of-the-art continual learning baselines evaluated in the context of image classification problems. § INTRODUCTION While the initial research done in the domain of continual learning from stationary data was, in large part, oriented towards task-incremental solutions, more recent works attempt to address generalized cases consisting of purely class-incremental and data-incremental (also known as domain-incremental) settings <cit.>. These scenarios are usually more universal but also more challenging and restrictive mainly due to the lack of task or even class labels. Such settings make many of the previously proposed solutions practically useless, for example, the methods based on memory-free regularization <cit.>, which are not capable of discriminating between older and new classes, even if they address the catastrophic forgetting problem <cit.>. Although the most standard experience replay methods can be effectively applied in the class-incremental scenarios <cit.>, there has been also a search for alternative approaches that could provide natural capabilities required for such cases. A significant group of methods can be identified based on their reliance on centroids (or prototypes) combined with the nearest-centroid classification methods <cit.>. Since centroids can be independently added to the classifier, they are examples of methods that can be very smoothly incorporated into class-incremental scenarios, offering almost no interference in the latent space. In this work, we explore an advanced version of these alternatives by proposing integration of the gradient-based Gaussian mixture model with a class-incremental deep continual learning framework, called MIX. In fact, it requires us to tackle three major problems at the same time: (i) gradient-based mixture training, (ii) combining it with a trainable deep feature extractor and, finally, (iii) making it suitable for class-incremental scenarios. 
To achieve these goals, we introduce a set of dedicated losses, configurations and methods, providing a probabilistic classifier on top of a feature extractor and within a model capable of learning end-to-end. This opens many potential research directions that could exploit the well-modeled statistical properties of Gaussians. In addition to that, we show that our class-incremental mixture model, analogously to the centroid-driven algorithms, is characterized by some inherent properties useful in continual learning scenarios. They allow it for much better separation of concepts at the level of the classification module, leading to significant improvements in memory-free scenarios when pre-trained extractors are used. Through an extensive empirical study, we analyze different configurations of our method, provide the reader with some intuition about its parameters and show its competitiveness in the context of other continual learning algorithms. § RELATED WORKS Continual learning: In continual learning, our focus should be on effective incorporation of the arriving data and retention of the acquired knowledge <cit.>. The main problem that learning algorithms will encounter here is catastrophic forgetting <cit.>. The most straightforward approaches involve replaying instances of previously seen tasks or classes while learning new ones <cit.>. Instead of putting instance-level constraints on the learning directions, we can apply direct adjustments to the loss using dedicated regularization terms. The most commonly used approach involves utilizing the knowledge-distillation loss <cit.> combined with standard cross-entropy <cit.> or maintaining importance weights to distinguish parameters that are crucial for the retention <cit.>. These methods generally cannot be used in more realistic class-incremental or data-incremental scenarios (if they do not use memory buffers), since they cannot learn how to discriminate new instances from the older ones <cit.>. Other approaches may employ masking to isolate parameters per task to keep them static when learning new ones <cit.>, use dynamic structures to expand the network for new concepts <cit.>, utilize ensemble techniques <cit.> or meta-learning and hypernetworks <cit.>. Finally, interesting alternative approaches focus on hybridizing the neural networks with different machine learning methods, e.g. decision trees <cit.> or centroid-driven algorithms <cit.>. The latter group of methods has been found especially useful in one-class-incremental scenarios, since, as mentioned in the introduction, centroids can be stored independently per class, allowing for natural class-incremental learning without additional interference at the level of a classifier. In this work, we follow these approaches and replace basic centroids learned separately from the feature extractor with more complex end-to-end mixture models. Mixture optimization: Various techniques can be applied for the task of fitting the mixture model to given data. The most standard approach utilizes the EM algorithm, which can be realized in both offline and online settings <cit.>. While EM provides a stable framework for learning the mixtures – in terms of mathematical constraints and convergence – it is critically limited when it comes to working with high-dimensional data and feasible memory consumption <cit.>. 
On top of that, this algorithm is intrinsically incapable of being fully integrated with neural networks, preventing it from achieving joint end-to-end deep learning and benefiting from dedicated features. An alternative approach involves gradient-based optimization <cit.>. This method has been proved to be able to provide more scalable and flexible algorithms capable of working in challenging scenarios with high-dimensional data and in online settings. Most importantly, the gradient-based approach naturally enables combining the model as a classifier with a trainable deep feature extractor <cit.>, allowing for extending the optimization process with the input space adjustments. Methods utilizing such a compound learning process showed much evidence of its usability in offline and unsupervised scenarios, while at the same time encouraging researchers to develop further extensions and improvements <cit.>. Given all of the characteristics, we decided to use this approach in our scenario of continual learning. § MIXTURE OF GAUSSIANS FOR CLASS-INCREMENTAL LEARNING Formally, the general goal of our work is to incrementally learn a classification model defined as ϕ^(t): 𝒳→𝒞 that can effectively incorporate subsequent class batches ⟨ (X^(1), c=1), (X^(2), c=2), ..., (X^(t), c=t)⟩, where X^(t) contains instances x only for a given class c. After t classes the model ϕ^(t) should aim at minimizing the loss for the current class c=t and all previously observed ones: ℒ^(t) = ∑_c=1^t∑_n=1^N_cℒ^(c)(ϕ^(t)(x_n^(c))), where x_n^(c)∈X^(c) and ℒ^(c) can be any supervised loss. Additionally, since we are interested in deep learning, we define the whole trainable model as a tuple ϕ^(t)=⟨ℱ^(t), 𝒢^(t)⟩ consisting of a feature extractor ℱ^(t) and a classifier 𝒢^(t) jointly aggregating knowledge from t classes. The model makes prediction by classifying the features provided from the extractor ϕ^(t)(x)=𝒢^(t)(ℱ^(t)(x))=𝒢^(t)(x̂). In this work, we aim at employing the mixture of Gaussians as a jointly trained incremental classifier. Although the model learns from dedicated features x̂, in the next section, we use x for the sake of simplicity of notation. §.§ Generic supervised mixture model Formally, in a standard unsupervised setting the density for a given point 𝐱 can be expressed using a multivariate normal distribution defined as: 𝒩(𝐱|μ_k, Σ_k) = 1/√((2π)^D|Σ_k|) × exp(-1/2(𝐱-μ_k)^TΣ_k^-1(𝐱-μ_k)), where μ and Σ are its mean and covariance, and D is the size of the input (number of dimensions). The Gaussian mixture models (GMM) have been designed to approximate more complex multivariate densities by decomposing them into K components: p(𝐱) = ∑^K_k=1ω_k𝒩(𝐱|μ_k,Σ_k), where each of them is defined using a single Gaussian defined above and ω_k are their weights. The combined model, equipped with more degrees of freedom, should be capable of providing more accurate expressions of the overall observed distributions than a simpler approach utilizing only a single component. In such a framework, the fitting of the mixture to given data X is based on minimizing the loss defined using the log-likelihood function: ℒ̅ = -log p(X|ω,μ,Σ) = -1/N∑^N_n=1log∑^K_k=1ω_k 𝒩(𝐱_n|μ_k,Σ_k), where we adjust the free parameters of the model – means μ, covariance matrices Σ and weights ω. 
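Because the gradient-based route is central to what follows, it may help to see how the negative log-likelihood above can be minimized directly with automatic differentiation. The sketch below is only an illustration of this idea under our own assumptions (PyTorch, diagonal covariances, and Adam as the optimizer); it is not the paper's implementation.

```python
import math
import torch

class DiagonalGMM(torch.nn.Module):
    """A K-component Gaussian mixture with diagonal covariances, trainable by gradient descent."""
    def __init__(self, n_components, dim):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.randn(n_components, dim))
        self.log_var = torch.nn.Parameter(torch.zeros(n_components, dim))  # keeps variances positive
        self.w_logit = torch.nn.Parameter(torch.zeros(n_components))       # softmax -> valid weights

    def log_prob(self, x):                                   # x: (N, dim)
        diff = x.unsqueeze(1) - self.mu                      # (N, K, dim)
        quad = (diff ** 2 / self.log_var.exp()).sum(dim=2)   # Mahalanobis term, (N, K)
        log_det = self.log_var.sum(dim=1)                    # log |Sigma_k|, (K,)
        log_norm = -0.5 * (x.shape[1] * math.log(2 * math.pi) + log_det + quad)
        log_w = torch.log_softmax(self.w_logit, dim=0)
        return torch.logsumexp(log_w + log_norm, dim=1)      # log p(x), (N,)

def fit_gmm(gmm, x, steps=500, lr=1e-2):
    opt = torch.optim.Adam(gmm.parameters(), lr=lr)
    for _ in range(steps):
        loss = -gmm.log_prob(x).mean()                       # negative log-likelihood loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return gmm
```

In the supervised setting described next, one such mixture is kept per class and fitted only to that class's (extracted) features.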
To adapt the given framework to supervised scenarios we can simply specify a separate mixture model for each class c: p(𝐱 | c) = ∑^K_k=1ω_k^(c)𝒩(𝐱|μ_k^(c),Σ_k^(c)), and focus on minimizing the aforementioned loss also per class ℒ̅^(c): ℒ̂ = ∑_c=1^Cℒ̅^(c) = -∑_c=1^C log p(X^(c)|ω^(c),μ^(c),Σ^(c)), where X^(c) are the N_c class-specific observations. In continual learning we should aim at minimizing the interference of current updates with previously created models to alleviate the detrimental effect of catastrophic forgetting. Therefore, it is worth mentioning here that GMMs create such an opportunity by allowing for maximizing the log-likelihood only for the currently learned class through ℒ̅^(c). This provides a perfect separation at the level of the classification model. §.§ Mixture optimization for class-incremental deep learning In order to apply gradient-based learning to GMMs in class-incremental deep learning scenarios, we have to address several different issues. Some of them are common to all GMM models trained with gradients, while others are specific to the class-incremental deep learning setting. In general, our goal is to optimize the class-incremental joint model ϕ^(t)=⟨ℱ^(t), 𝒢^(t)⟩, defined at the beginning of Sec. <ref>, using some supervised loss ℒ. Since we set 𝒢^(t) = 𝒩^(t), where 𝒩^(t) is a whole GMM model, we have ϕ^(t)(x)=𝒩^(t)(ℱ^(t)(x)). The trainable parameters are weights ∂ℒ / ∂W and biases ∂ℒ / ∂b for the extractor, and means ∂ℒ / ∂μ, covariance matrices ∂ℒ / ∂Σ and component weights ∂ℒ / ∂ω for the classifier. All of the subsequent paragraphs focus on designing the optimization in the classifier (mixture) space, as introduced in Sec. <ref>. §.§.§ Loss design Max-component: It has been shown that optimizing the full loss ℒ̅^(c) given in Eq. <ref> may lead to numerical instabilities, especially for high-dimensional data <cit.>. To address this issue a max-component approximation can be used. This approach is very straightforward. Since all p(x|c,k) in Eq. <ref> are positive, any single component provides a lower bound for the whole sum used in ℒ̅^(c). If for every point x_n we take the component providing the highest log-likelihood and sum these terms, we obtain the largest (max-component) lower bound <cit.>: ℒ^(c)_max = -1/N_c∑^N_c_n=1max_klog(ω_k^(c)𝒩(𝐱^(c)_n|μ_k^(c),Σ_k^(c))). Since ℒ^(c)_max ≥ ℒ̅^(c), we can drive ℒ̅^(c) down by minimizing ℒ^(c)_max alone. It is also worth mentioning that, just as the general formula given in Eq. <ref> eliminates interference with previously learned classes, the max-component approximation limits the same issue at the level of class components, for example, in data-incremental scenarios <cit.>, making this approach a natural candidate for continual learning settings. Inter-contrastive loss: All of the losses introduced so far are limited to scenarios either without a feature extractor or with a fixed pre-trained one. Unfortunately, if we can modify the input space of the mixture model and rely entirely on maximizing the log-likelihood, we will inevitably end up in a degenerate local minimum of the joint model ϕ^(t) in which 𝒢^(t)(x) = 0 for all x. This issue can be solved by incorporating an inter-contrastive loss that pushes apart the representations of different classes. 
We define the loss as: ℒ^(c)_ie = 1/N_cmax_j ≠ c∑^N_c_n=1max_klog(ω_k^(j)𝒩(𝐱^(c)_n|μ_k^(j),Σ_k^(j))), which boils down to finding the closest component in other classes, and then optimizing against the class that on average is the closest to the one currently being considered. We keep the log-likelihood to ensure a similar numerical space of loss values as the one for the positive part given in Eq. <ref>. However, now one should notice that minimizing such a loss may very easily destabilize learning since optimization will gravitate towards ℒ̅^(c)_ie→ -∞ preventing the model from actually fitting to the class examples. To avoid it we introduce a tightness bound τ that clips the contrastive loss value at some pre-defined point ℒ^(c)_ie(τ) = max(τ, ℒ^(c)_ie). This basically means that we stop the decrease of the contrastive loss below the given bound, allowing for a more significant contribution of the actual fitting part ℒ^(c)_max. We parametrize the τ value with a simple linear transformation τ = p̅_max^(c) - 1/τ_p, where p̅_max^(c) is the average maximum density value observed across all class components (can be obtained on-the-fly) and τ_p is a tunable hyperparameter that takes values between ( 0,1 ⟩. Such a loss can provide effective discrimination between components of different classes, as shown for an example in Appendix A. Diverse components: While all of the introduced techniques and modifications ensure reliable discrimination between components of different classes, they do not consider differentiation between components of the same class or their quality. In fact, even in offline gradient-driven settings without dynamic feature extraction it is common to obtain mixtures reduced to a single component per class with all the others practically meaningless, e.g., due to zeroed weights <cit.>. In scenarios with a trainable extractor, this problem becomes even more significant as it is very easy for the optimizer to focus on maximizing log-likelihood from a single component, as both mixture model and flexible extractor lack constraints to prevent this. While in standard scenarios this problem can be successfully addressed with a good initialization method, e.g., using k-means <cit.>, we observed that it was not enough in our case. As a consequence, we introduced two elements to the learning process. Regionalization – before learning each class, we first divide it into K clusters using the k-means clustering. Then we force each component to fit only to the data from its cluster called a region ℛ^(c)_k. This replaces the max-component loss ℒ^(c)_max defined in Eq. <ref> with: ℒ^(c)_reg = -∑^K_k=11/N_k∑_x∈ℛ^(c)_klog(ω_k^(c)𝒩(𝐱|μ_k^(c),Σ_k^(c))). Intra-contrastive loss – the regionalization approach is necessary yet not sufficient to provide sufficient diversification between same-class components. The reason for it is the same as for discrimination between different classes, as described in the previous paragraph. Analogously to the inter-contrastive loss, we add the intra-contrastive loss with the tightness bound τ: ℒ^(c)_ia(τ) = ∑^K_k=1max( τ, max_m ≠ k1/N_k ×∑_x∈ℛ_klog(ω_m^(c)𝒩(𝐱|μ_m^(c),Σ_m^(c))), which for each class region pushes away other same-class components that on average are closest to the currently considered one, based on the regionalization conducted in the previous step. Obviously, one can define separate τ for the inter- and intra-contrastive loss. 
Such an approach can effectively increase the diversity of the same-class components, as given for an example in Appendix A. However, this approach imposes a hard constraint on how the representation and mixture may look, which limits the flexibility of the whole model. Regardless of these concerns, this method can still effectively improve the overall performance of a multi-component model over a method without the proposed improvement, as we will show in our extensive experiments. Final component-based losses: To summarize, we distinguish two component-based losses. One uses the max-component approach (MC): ℒ_mc = ∑_c=1^tℒ_max^(c) + ℒ_ie^(c)(τ_ie), while the second loss adds the regionalization technique with the intra-contrastive part (MCR): ℒ_mcr = ∑_c=1^tℒ_reg^(c) + β(ℒ_ie^(c)(τ_ie) + ℒ_ia^(c)(τ_ia)). Cross-entropy loss: Last but not least, we can also attempt to directly optimize the whole standard loss ℒ̂ given in Eq. <ref>, using a high-level supervised wrapper loss, e.g., based on cross-entropy (CE). In such a case, our loss is defined as: ℒ_ce = -∑_c=1^t∑_n=1^N_cy_n^(c)logŷ_n^(c), where y is a one-hot target vector and ŷ_n^(c) comes from the softmax function ŷ_n^(c) = e^p_n^(c)/∑_c=1^te^p_n^(c) and p_n^(c)=p(x_n|c) is a density value for a given class produced by the mixture model accordingly to Eq. <ref>. §.§.§ Constraints Other issues that have to be addressed when using gradient-based mixture training are the mathematical constraints that have to be enforced to preserve a valid mixture model. This is required since gradient-based learning does not constrain the possible values for means, covariance matrices and weights, and the last two have to remain in a specific range of values. Component weights: For the GMM model its component weights ω_k have to sum up to one: ∑_k=1^Kω_k=1. To ensure that the effective weights satisfy this requirement we simply train auxiliary free parameters ω̂_k and use the softmax-based normalization ω_k = e^ω̂_k/∑_j=1^Ke^ω̂_̂ĵ to obtain required values <cit.>. Covariance matrices: For a general case, the covariance matrices of the GMM model should be symmetric positive definite v^TΣv > 0 for all nonzero vectors v. This can be enforced using the Cholesky decomposition <cit.> Σ = AA^T, where A is a triangular matrix with positive diagonal values a_ii > 0 and, at the same time, our trainable proxy parameter. To enforce positive diagonal values, after each gradient-based update we clamp them with a_ii = min(a_ii, d_min) using some predefined d_min value. Finally, we also consider a case of a mixture using only the diagonal of the covariance – variance σ, which we control using the same clamp-based approach σ_i = min(σ_i, d_min). §.§ Memory buffer In our work, we consider the class-incremental scenario with strictly limited access to previously seen observations (classes). Therefore, in all of the introduced losses we use all available data for the currently learned class t, while for the others we sample from the memory buffers ℳ_c that store an equal number of examples per each previously seen class. On the other hand, if the feature extractor is pre-trained and static we could remove the inter-contrastive loss and even get rid of the memory buffer, allowing for memory-free training, as we will show in our experimental study. The memory buffer is needed in a general case when we assume the joint training of the whole model. 
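To make the component-based losses more concrete before moving on, the sketch below shows one way the ℒ_mc variant (the max-component fitting term plus the tightness-clamped inter-contrastive term) could be written for diagonal-covariance components. The tensor layout, the helper names, and passing the other classes' replayed parameters as a list of tuples are our assumptions; the tightness bound τ_ie is treated as a given scalar.

```python
import math
import torch

def weighted_log_densities(x, mu, log_var, w_logit):
    """log(omega_k * N(x | mu_k, Sigma_k)) for every point and component, shape (N, K)."""
    diff = x.unsqueeze(1) - mu                                # (N, K, D)
    quad = (diff ** 2 / log_var.exp()).sum(-1)                # (N, K)
    log_det = log_var.sum(-1)                                 # (K,)
    log_norm = -0.5 * (x.shape[1] * math.log(2 * math.pi) + log_det + quad)
    return torch.log_softmax(w_logit, dim=0) + log_norm

def mc_loss(x_c, params_c, params_other, tau_ie):
    """Max-component fit for class c plus the tightness-clamped inter-contrastive term."""
    # fitting term: raise the best-matching component's weighted log-density for the class data
    fit = -weighted_log_densities(x_c, *params_c).max(dim=1).values.mean()
    if not params_other:
        return fit
    # inter-contrastive term: the other class whose closest components are, on average, nearest
    closest = [weighted_log_densities(x_c, *p).max(dim=1).values.mean() for p in params_other]
    ie = torch.stack(closest).max()
    # clamping at tau_ie stops pushing other classes away once they are already far enough
    return fit + torch.clamp(ie, min=tau_ie)
```

The ℒ_mcr variant follows the same pattern, with the fitting term restricted to the k-means regions and an analogous clamped term acting between same-class components.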
§.§ Classification Finally, in the presented model, the classification of an instance x_n can be performed using two approaches, either utilizing the softmax function ŷ_n^(c) = e^p_n^(c)/∑_c=1^te^p_n^(c), where p_n^(c) = p(x_n|c), or by taking the weighted support of the closest component ŷ_n^(c) = max_kω_k^(c)𝒩(𝐱_n|μ_k^(c),Σ_k^(c)). We will empirically show that these methods work best with specific losses designed in the previous sections. § EXPERIMENTAL STUDY In our experiments, we empirically explore all of the introduced methods and parameters and put our method in the performance context of different state-of-the-art baselines. We show how our model performs in end-to-end scenarios and with a pre-trained extractor, compared with other solutions. For more specific details regarding data, configurations and results, please refer to Appendix A and B, as well as to our repository containing source code for our methods and all experiments: (please check the source code provided in the supplementary materials, a public URL will be added later). All of the experiments were conducted using 4 GPUs (Tesla V100) that were part of an internal cluster. §.§ Setup For the purpose of the evaluation we selected commonly utilized image classification datasets that were turned into class-incremental sequences by presenting their classes subsequently to the models <cit.>. We used: MNIST, FASHION, SVHN, CIFAR and IMAGENET datasets using various variants (number of classes, pre-trained features). For the analysis of different configurations of our model we used shorter sequences. We extended them with the longer benchmarks for the comparison with baselines. In the final section of this work, we compared our class-incremental Gaussian mixture model (MIX-MCR, MIX-CE) with other classifiers dedicated for continual learning scenarios. We considered: standard experience replay (ER) <cit.>, experience replay with subspaces (ERSB) <cit.>, centroid-based iCaRL <cit.>, two gradient-based sample selection methods (GSS and A-GEM) <cit.>, experience replay combined with knowledge distillation and regularization (DER) <cit.>, and two purely regularization-based approaches – LWF <cit.> and SI <cit.>. Most of the algorithms were implemented as wrappers of the source code provided in <cit.> under MIT License. For the last two we used their modifications adjusted for single-task learning <cit.>. As our lower bound we used a naively learning net (NAIVE), and for the upper bound we present results for the offline model (OFFLINE). We evaluated the presented methods in a class-incremental setting, where all of the classes were presented to the models subsequently and were not shown again after their initial appearance. We measured the accuracy of a given algorithm after each class batch, utilizing holdout testing sets, and then, based on <cit.>, used it to calculate the average incremental accuracy over the whole sequence: Ω_all = 1/T∑_t=1^Tα_t, where α_t is the model performance after t classes and T=C is the total number of classes. In addition to the whole aggregation, for the final comparison, we provided these values after each batch to present a more complete perspective of the obtained results. §.§ Results In this section, we present and describe all of the results that were obtained for the experiments introduced in the previous paragraphs. The first part consists of the analysis of different configurations of MIX, while the second one focuses on a comparison with other class-incremental algorithms. 
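Since the comparison below repeatedly contrasts softmax and max-component classification, here is a compact sketch of the two prediction rules from the Classification subsection, reusing the weighted_log_densities helper from the earlier sketch. The log-domain formulation is our choice; the arg max is unaffected by whether the softmax is applied to the class densities or the log-densities are compared directly.

```python
import torch

def classify(x, class_params, rule="max_component"):
    """class_params maps a class id to the (mu, log_var, w_logit) of that class's mixture."""
    class_ids, scores = [], []
    for c, params in class_params.items():
        log_wn = weighted_log_densities(x, *params)          # (N, K_c)
        if rule == "softmax":
            scores.append(torch.logsumexp(log_wn, dim=1))    # log p(x | c), full mixture support
        else:
            scores.append(log_wn.max(dim=1).values)          # support of the closest weighted component
        class_ids.append(c)
    winners = torch.stack(scores, dim=1).argmax(dim=1)       # (N,)
    return [class_ids[i] for i in winners.tolist()]
```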
Loss and classification: We analyzed different combinations of the proposed losses and classification methods. Based on Fig. <ref>, we can make three major observations. Firstly, the softmax classification works significantly better with the CE loss, and max-component can be more efficiently paired with MC and MCR than softmax. It was evident for almost all cases (except for MC on CIFAR10) and resulted in almost 0.15 difference on average between softmax and max-component for CE, and about 0.05 for MC and MCR. Secondly, the MCR loss performed better than MC, showing consistent improvements, especially for more complex datasets like SVHN, CIFAR10 or IMAGENET10, which resulted in more than 0.1 for a difference on average. This demonstrate that the regionalization and intra-contrastive loss are capable of providing meaningful improvements over simpler MC loss utilizing only max-component and inter-contrastive elements, and that ensuring higher diversity among class components can be beneficial to the model. Finally, we can see that CE with softmax could provide very similar results as MCR with max-component, which means that the general GMM learning formula, wrapped with a high-level supervised loss, can be sometimes as useful as more complex MCR without the need for tuning additional parameters. One drawback of using CE, however, is the fact that it does not model the Gaussian mixtures well (see Appendix B for additional visualizations). The CE loss does not really have to fit the mixtures to the data since it is enough for it to ensure high classification quality. We can also observe a similar behavior for the MC loss. It may be prohibitive if one wants to obtain a reliable description of the latent space. The MCR loss achieves both objectives at the same time: high classification accuracy and high quality of the mixture models for features. This may be important if someone requires interpretable models or would like to extend the proposed algorithm with some Gaussian-oriented techniques that MCR may enable. Furthermore, we believe that analyzing its probabilistic properties in detail could be a part of incremental works built on top of the mixture model. They could utilize its well-defined characteristics, e.g. by proposing new mixture-based losses. Tightness: In Fig. <ref>, we presented a grid of values for the average incremental accuracy per each pair of inter- and intra-tightness for every dataset. One can clearly see that imposing the constraint (tightness) on the inter- and intra-contrastive loss values is beneficial to the learning process. Most of the benchmarks required τ_p, ie at the level of 0.0001 or 0.001 and slightly higher intra-tightness τ_p, ia around 0.001 or 0.01 to achieve the best results. At the same time, one should notice that imposing too high inter-tightness (0.01) leads to abrupt deterioration of quality, which is a result of blocking the contrastive part of the loss from pushing components of different classes from each other. The influence of setting too high intra-tightness is less important since we may simply end up with a single component that can still be effectively used for classification. The examples for FASHION, given in Fig. <ref> and <ref>, show how increasing the inter-tightness (the first one) and intra-tightness (the second one) affects learned representations and mixture models. 
We can observe the positive impact of the constraint and the potential for sweet spots providing a good balance between differentiating components from each other and fitting them to the actual data. It is evident that values that are too low introduce critical instabilities into the learning process (very high contrastive loss values overwhelming the fitting part), while thresholds that are too high lead either to a decline in the discriminative properties of the model or to degenerate solutions.
Baseline comparison: In the second section of our experimental study, we placed our algorithm in the class-incremental performance context by comparing it with the introduced baselines (Fig. <ref>). First of all, we can see that the MIX-MCR variant performed better than MIX-CE for most of the datasets, while being very close to it for the longer sequences (differences between less than 0.01 and 0.03). This shows that MIX-MCR is capable of providing not only a better representation (mixture) model but also more reliable accuracy. It also means that it is worth trying to maximize the quality of the produced Gaussian models as an alternative to high-level cross-entropy for classification. Secondly, although our model cannot be distinguished as the best classifier (being worse than iCaRL on average, with a difference equal to about 0.04), it is, at the same time, reliably competitive when compared with the remaining baselines (ER, GSS, DER), with differences between about 0.01 and 0.03. Also, it does not fall into the same pitfalls as either the weakest replay method (A-GEM) or the regularization-based ones (LWF, SI), outperforming them by almost 0.4 in average accuracy. We can see that MIX could be found among the best models for MNIST, FASHION, IMAGENET10, IMAGENET20A and IMAGENET20B, especially at the end of the sequences, providing relatively reliable performance throughout. On the other hand, it struggled to catch up with the best replay methods for SVHN and the CIFAR-based datasets, showing that there is still potential for improvement when it comes to predictive accuracy. The overall very poor performance of LWF and SI (but also A-GEM), which were not much better than the NAIVE approach, confirms the observations made in other publications that the regularization-based methods cannot handle the most challenging 1-class-incremental scenarios without memory buffers <cit.>, even after the improvements proposed in <cit.>. We can also see that for the scenarios with end-to-end training the models were much closer (0.01-0.3) to the OFFLINE upper bound for the shorter sequences (MNIST, FASHION, SVHN and IMAGENET10, except for CIFAR10) than for the longer ones (IMAGENET20A, IMAGENET20B, CIFAR20), with differences between 0.4 and 0.5, which shows that all of the state-of-the-art methods still struggle with bridging the gap between incremental learning and the offline optimum. Finally, the results for the memory-free scenarios with pre-trained models, given in the last row of Fig. <ref>, exhibit the main strength of the MIX algorithm. Since in these scenarios it does not use the inter-contrastive loss, it can perfectly separate the incremental learning process for each class, preventing catastrophic forgetting at the level of the classifier.
As a result, it does not have to rehearse the previous concepts at all (ℳ_c=0) while still being able to learn very effectively, producing results very close to the OFFLINE upper bound (differences between about 0 and 0.1), regardless of the quality of the extractor (pre-trained on 10 and 20 or 100 and 200 classes). The MIX-MCR method outperforms all of the baselines for all cases except for IMAGENET200-PRE20, for which only iCaRL was able to provide slightly higher accuracy, even though the baselines had the small advantage of keeping approximately one example per class in the buffer. It is not a coincidence that practically only iCaRL is close to our method on average (worse by about 0.1), since it uses a similar paradigm in the classification layer by storing prototypes/centroids that are used for classification. All of the remaining algorithms cannot handle the memory-free scenario effectively, producing solutions worse by at least 0.2 on average. This can be a crucial property when one has to consider, for example, data privacy issues or mobile and edge computing. All of the presented observations, conclusions and recommendations can also be found in a condensed form at the end of Appendix B.
§ SUMMARY In this work, we introduced a class-incremental mixture of Gaussians model (MIX) for deep continual learning. We proposed different variants of the algorithm to make it suitable for gradient-based optimization and, through an extensive experimental study, we exhibited its practical configurations and capabilities in the context of other state-of-the-art continual learning models. In our future research, we will focus on replacing the regionalization approach with a more flexible method that does not assume any pre-training structure and allows the gradient-based procedure to fully explore potential solutions, e.g., annealing <cit.>, and on removing the static tightness hyperparameter to increase flexibility even more – it could be more beneficial to either find a better (parameter-free) distance function or propose an adaptive threshold. It is also an open question whether we can effectively train a gradient-based mixture using a full covariance matrix. Finally, we could consider some kind of hybridization of the mixture models with the feature extractor to benefit from the capability of the former to limit interference with previously learned concepts by utilizing max-component losses. All of these potential improvements combined could provide significant performance gains in class-incremental continual learning scenarios.
§ APPENDIX §.§ Data We used MNIST, FASHION, SVHN, CIFAR10 and IMAGENET10 – a subset of the tiny IMAGENET200 – to gain deeper insights into our method while conducting experiments with hundreds of different configurations. Then, we extended this set with CIFAR20 – the coarse-grained version of CIFAR100 – as well as IMAGENET20A and IMAGENET20B – larger subsets of IMAGENET200 – to benchmark our method against other algorithms. For the experiments involving fixed extractors, we used pre-trained features to construct four additional sequences – CIFAR100-PRE10, CIFAR100-PRE100, IMAGENET200-PRE20 and IMAGENET200-PRE200 – which consisted of features extracted for CIFAR100 and IMAGENET200, using extractors trained on 10, 20, 100 and 200 classes of the original datasets. The summary of the used benchmarks is given in Tab. <ref>. Details of the feature extractors can be found in the next section.
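To make the class-incremental protocol used for all of these benchmarks concrete (classes shown one at a time and never revisited, accuracy α_t measured on a holdout set after each class batch, then averaged into Ω_all), here is a schematic evaluation loop. The model interface (fit_incremental, predict) and the pre-split per-class data are hypothetical placeholders, not the authors' API.

```python
import numpy as np

def class_incremental_run(model, train_batches_by_class, cumulative_test_sets):
    """train_batches_by_class[t]: training data for class t only (presented once).
    cumulative_test_sets[t]: holdout test set covering all classes seen up to step t."""
    alphas = []
    for t, class_batch in enumerate(train_batches_by_class):
        model.fit_incremental(class_batch)                        # learn the new class (plus optional memory replay)
        X_test, y_test = cumulative_test_sets[t]
        alphas.append(np.mean(model.predict(X_test) == y_test))   # alpha_t: accuracy after t+1 classes
    omega_all = float(np.mean(alphas))                            # Omega_all = (1/T) * sum_t alpha_t
    return omega_all, alphas
```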
§.§ Model configurations In the first section of our experiments, we explored different configurations of our algorithm, which can mostly be seen as an ablation study. Firstly, we evaluated the different losses (CE, MC and MCR) combined with the different classification methods (softmax, max-component). Secondly, we checked different settings for the tightness bound parameter τ_p by evaluating a grid of values for inter-tightness and intra-tightness – we considered τ_p ∈⟨1e-06, 1e-05, 0.0001, 0.001, 0.01⟩ for both. Thirdly, we analyzed how assuming different numbers of components affects the classification performance on different datasets. We used K ∈⟨1, 3, 5, 10, 20⟩. Then we checked whether it is better to maintain a whole covariance matrix or only its variance (FULL, VAR). Finally, we evaluated different learning rates for the extractor and the GMM part, using α_ℱ∈⟨1e-07, 1e-06, 1e-05, 0.0001, 0.001⟩ and α_𝒢∈⟨1e-05, 0.0001, 0.001, 0.01, 0.1⟩, to check whether it may be beneficial to configure them separately, and different memory sizes ℳ_c ∈⟨8, 64, 128, 256, 512⟩ to analyze how our method exploits limited access to class examples. While evaluating specific parameters, we kept the others fixed. For our base configuration we chose a setup that was capable of providing performance comparable with standard experience replay. We used MCR with max-component as our loss and classification method, K=3, τ_p,ie=0.002, τ_p,ia=0.01, β=0.5, α_ℱ=0.0001, α_𝒢=0.001 and d_min=0.001 with only the variance stored per component. We assumed a modest memory buffer per class ℳ_c=256 and matched the size of a memory sample per class with the training batch size. The model was trained for 10 (MNIST, FASHION) or 25 epochs per class, with 32 (IMAGENET) or 64 instances in a mini-batch.
§.§ Algorithms Based on the observations made in the first section of the experiments, in the final evaluation we used two variants of our algorithm: MIX-CE and MIX-MCR with τ_p,ie=0.0001, τ_p,ia=0.001, α_ℱ=0.0001, α_𝒢=1e-05 and, once again, d_min=0.001 with only the variance maintained per component. The only parameter that we tuned per dataset was the number of components K. We used Adam as the optimizer. For the memory-free scenarios with pre-trained extractors, we turned off the inter-contrastive loss to minimize interference with previously learned classes. The main parameters of the baseline methods were set based on the original papers and other literature, including empirical surveys or works containing vast empirical studies <cit.>. For all memory sampling methods we matched the memory sampling size with the training batch size. For ERSB we used 10 centroids per class, each containing up to either 25 or 15 instances to match the total memory size. DER used α_d=0.5; for LWF we set the softmax temperature T=2 and progressively increased its distillation coefficient as suggested in <cit.>; and SI used λ=0.0001. All of the methods utilized the Adam optimizer with a learning rate α=0.0001, as we did not observe any significant differences when changing this parameter. Analogously to the configuration section, all of the algorithms, including ours, were trained for 10 (MNIST, FASHION) or 25 epochs per class, using 32 (IMAGENET) or 64 instances per mini-batch. The offline models were trained for either 50 or 100 epochs, until they achieved a saturation level.
The memory buffer was set to ℳ_c=128 (IMAGENET) or ℳ_c=256 for methods supporting memory per class (ER, ERSB, iCaRL), and ℳ=C·128 or ℳ=C·256 for the remaining ones (GSS, A-GEM, DER), where C was the total number of classes. The latter group was equipped with reservoir buffers <cit.>. For the experiments with pre-trained extractors we wanted to check the memory-free scenario; therefore, we set ℳ_c=0 for our methods and ℳ_c=1 or ℳ=C for the others, since most of them could not be run without storing any examples. All of the algorithms, including the different configurations of our method, were combined with feature extractors. For MNIST and FASHION we used a simple CNN with two convolutional layers consisting of 32 (5x5) and 64 (3x3) filters, interleaved with ReLU, batch normalization and max pooling (2x2). For SVHN and IMAGENET we utilized ResNet18, its modified version for CIFAR10 and CIFAR20, and ResNeXt29 for CIFAR100 <cit.>. The classification layers consisted of the default configurations. Finally, for our method, ER, ERSB, A-GEM and DER we disabled batch normalization, since, consistently with <cit.>, we observed a significant difference in performance when those layers were turned off for the given methods. As mentioned in Sec. <ref>, for the memory-free scenarios, the extractors were pre-trained on either 10, 20, 100 or 200 classes of CIFAR100 and IMAGENET200. For this setting we trained all the models for 20 epochs per class. Results for the offline model were either obtained by us (learned from scratch for IMAGENET20A and IMAGENET20B, and fine-tuned models for IMAGENET200) or by referring to other publications <cit.>.
§ APPENDIX §.§ Additional visualizations Fig. <ref> presents an example of a single-component class-incremental mixture model learned with the inter-contrastive loss. Fig. <ref> demonstrates the effectiveness of training a multi-component model with the intra-contrastive loss and regionalization. As mentioned in the main document, the CE loss can often achieve similar predictive performance even if its mixture models do not really fit the data (Fig. <ref>). We can see this when comparing with MC for K=1 or MCR for both K (Fig. <ref> and <ref>). Furthermore, the model produced for MC with K=3 clearly shows that it is incapable of effectively utilizing multiple components for the same class. Please notice that only the Gaussians in the middle actually cover some data points, while the remaining components are completely unrelated to the observed data. These are examples of degenerate solutions. While for FASHION this loss could still, analogously to CE, provide performance similar to MCR (the components in the middle are fitted to the data and they are sufficient to model it), the observed desynchronization of components results in its weaknesses for more complex problems. The MCR loss can provide high quality of predictive performance and of the produced mixture models.
§.§ Additional configurations Number of components: Tab. <ref> presents how many components were required to obtain the best solutions per dataset for the given settings. We can observe that for the simpler datasets (MNIST, FASHION) using a single component per class was sufficient and that introducing additional ones led to slightly worse performance, most likely due to fitting to simple concepts and overcomplicating the optimization problem.
On the other hand, more complex benchmarks (SVHN, CIFAR10, IMAGENET10) preferred access to more components per class, which could provide significant improvements, e.g., for SVHN the difference between K=1 and K=10 was almost 0.3. While for these experiments we set the learning rate slightly higher for the GMM model (0.001) than for the extractor (0.0001), we observed that when the former used rate lower than the latter (as suggested by the results for learning rates that will be presented below), the optimal K tended to be lower on average. It is possible that if GMM is dominant it prefers having more flexibility (components), while when the extractor has a higher learning rate it may be more effective in adjusting representations to lower numbers of components. Covariance: Results presented in Tab. <ref>, unequivocally show that our gradient-based MIX can much better adapt to data if it maintains only the variance of the covariance matrix (better by almost 0.3 when compared with full covariance). It is not surprising since previous publications related to the gradient-based GMMs for offline settings suggested a similar thing <cit.>. Most likely, working with a full covariance matrix leads to less stable loss values, and many more free parameters (especially if the feature space is high-dimensional) likely cause problems with convergence. Learning rates: Analogously to the experiments for tightness, in Fig. <ref> we presented the grid of results for different extractor (horizontal) and mixture (vertical) learning rates. The obtained results suggest that the former part is more important – once the optimal rate is set (0.0001 for the given settings) tuning the latter seems less significant, although overall it should be set to a similar or slightly lower value. Memory size: Finally, if we look at the results of class-incremental learning using different memory sizes, given in Fig. <ref>, we will see that MIX can effectively utilize larger buffers and that it seems to be quite memory-dependent, especially for SVHN where the difference between subsequent sizes ranged from 0.1 to 0.2. Still, the gap was much smaller for all of the remaining datasets. While this characteristic of the algorithm may be problematic (the fewer examples we need, the better), it is still valid that if we can use a pre-trained extractor, the whole model does not need to use the memory buffer at all. §.§ Lessons learned Based on the theoretical and empirical analysis presented for this work we can conclude the following. * Class-incremental learner. Regardless of many combined challenges, it is possible to successfully hybridize the gradient-based mixture models on top of convolutional feature extractors, and use them in class-incremental end-to-end continual learning scenarios. The presented results show that MIX is capable of providing competitive results when compared with well-known incremental baselines. * Dedicated losses. It has been shown that the training of the mixture models combined with dynamic feature extractors requires the inter-contrastive loss to effectively distinguish components of different classes from each other. In addition to that, to ensure diversity among same-class components and avoid degenerate solutions, such techniques as regionalization combined with the intra-contrastive loss are required. We showed that not only do the proposed approaches deliver what was intended, but also that they can translate into significant performance gains for more complex datasets. 
Finally, although the more generic high-level cross-entropy loss may provide good solutions in many cases, only the most advanced variant (MIX-MCR) delivers both high predictive performance and high quality of generated mixture models, which may be important from the perspective of interpretability or potential Gaussian-based extensions. * Effective tightness. The tightness bound plays a crucial role in stabilizing the mixture learning procedure. Setting the optimal values of inter- and intra-tightness leads to striking a balance between pushing different components from each other and actually fitting them to the data. Intuitively, the inter-tightness prefers slightly lower values than intra-tightness. * Recommended configurations. By analyzing other different hyperparameter settings and combinations of our methods we could observe that: (i) the CE loss works much better with the softmax classification method, while MC and MCR should be combined with the max-component approach, (ii) different numbers of components may be required for different data and different learning rates may also affect the optimal number, (iii) maintaining only the diagonal of the covariance matrices leads to more stable optimization and better results, (iv) the learning rate for the feature extractor dominates over the one for the mixture model, and that (v) MIX is quite memory-dependent in general end-to-end scenarios. * Memory-free scenarios. At the same time, MIX is capable of learning without a memory buffer if we use a fixed pre-trained extractor and disable the contrastive loss that is not needed in this case. Our method stands out as the best model for such class-incremental scenarios which can be very important if there are any data privacy concerns or strict memory limits.
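One of the recommendations above—(iii) maintaining only the diagonal of each covariance matrix—ultimately comes down to how a component's log-density is evaluated and parameterized. The following sketch contrasts the two options; it is our own illustration (the names and the log-variance parameterization are assumptions), not the released code.

```python
import math
import torch

def component_log_prob(x, mu, cov, diagonal_only=True):
    """x, mu: (D,) tensors. If diagonal_only, cov holds D log-variances (left unconstrained
    for gradient descent); otherwise cov is a full (D, D) covariance matrix."""
    if diagonal_only:                       # the "VAR" option
        var = torch.exp(cov)                # log-variance parameterization keeps variances positive
        return -0.5 * torch.sum(math.log(2.0 * math.pi) + cov + (x - mu) ** 2 / var)
    # the "FULL" option
    mvn = torch.distributions.MultivariateNormal(mu, covariance_matrix=cov)
    return mvn.log_prob(x)
```

The full-covariance branch carries O(D^2) free parameters per component and must remain positive definite throughout optimization, which may partly explain the instability reported for the FULL variant in the appendix.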
http://arxiv.org/abs/2307.04752v1
20230710175622
Ogg's Torsion conjecture: Fifty years later
[ "Jennifer S. Balakrishnan", "Barry Mazur" ]
math.NT
[ "math.NT", "math.AG" ]
Ogg's Torsion conjecture: Fifty years later
Jennifer S. Balakrishnan and Barry Mazur (with an appendix by Netan Dogra)
Andrew Ogg's mathematical viewpoint has inspired an increasingly broad array of results and conjectures. His results and conjectures have earmarked fruitful turning points in our subject, and his influence has been such a gift to all of us. Ogg's celebrated Torsion Conjecture—as it relates to modular curves—can be paraphrased as saying that rational points (on the modular curves that parametrize torsion points on elliptic curves) exist if and only if there is a good geometric reason for them to exist.[B.M.: And here's just one (tiny) instance of Ogg's jovial and joyful way of thinking: As Tate and I recorded in one of our papers <cit.>: “Ogg passed through our town" and mentioned that he had discovered a point of order 19 on the Jacobian of X_1(13) allowing us to feel that that Jacobian was “not entitled to have" more than 19 points." ]
§ AN OVERVIEW Let K be a number field, and denote by G_K its absolute Galois group, i.e. G_K = Gal(K̅/K). A basic question in the arithmetic of abelian varieties over number fields is to classify (up to the natural notion of isomorphism) pairs (A; C α↪ A(K̅)) where * A is a (polarized) abelian variety defined over K, * C is a finite abelian group with a G_K-action, and * α is a G_K-equivariant injection. These are the three basic parameters in this general question, and you have your choice of how you want to choose the range of each of them. For example, you can: * allow the “C"s to run through all cyclic finite groups with arbitrary G_K-action; and A to range through all abelian varieties with a specified type of polarization. Equivalently, you are asking about K-rational cyclic isogenies of abelian varieties, or * restrict to finite “C"s with trivial G_K-action, in which case you are asking about K-rational torsion points on abelian varieties. * You might also vary over a class of number fields K—e.g., number fields that are of a fixed degree d over a given number field k, * and, of course, fix the dimension of the abelian varieties you are considering.
§ `GEOMETRIZATION' OF THE PROBLEM If you organize your parameters appropriately you can `geometrize' your classification problem by recasting it as the problem of finding K-rational points on a specific algebraic variety. In more technical vocabulary: you've framed a representable moduli problem—and the algebraic variety in question is called the moduli space representing that moduli problem.
§ SOME CLASSICAL EXAMPLES—MODULAR CURVES Fixing N a positive integer and sticking to elliptic curves, the moduli spaces for rational torsion points or cyclic isogenies are smooth curves defined over ℚ: torsion points of order N: Y_1(N) ⟶ X_1(N); cyclic isogenies of degree N: Y_0(N) ⟶ X_0(N). The elliptic curves defined over K possessing a K-rational point of order N are classified by the K-rational points of the affine curve Y_1(N)—and X_1(N) is the smooth projective completion of Y_1(N) given by the adjunction of a finite set of `cusps'. And similarly, the classification of elliptic curves defined over K possessing a K-rational cyclic isogeny of degree N is given by the K-rational points of the affine curve Y_0(N)—with X_0(N) being the corresponding smooth projective completion.
§ THE GEOMETRIC FORMULATION COMES WITH A NUMBER OF SIDE-BENEFITS.
Here are two: * If, say, the curve X_0(N) is of genus 0—noting that one of the cusps (∞) is defined over , it follows that there is a rational parametrization of that curve over which gives us a systematic account (and parametrization); that is, a K-rational parametrization of cyclic N-isogenies of elliptic curves—for any K. * If it is of genus greater than 0, one has a -rational embedding (sending the cusp ∞ to the origin) X_0(N) ↪ J_0(N) of the curve in its Jacobian, which allows us to relate questions about K-rational cyclic N-isogenies to questions about the Mordell-Weil group (of K-rational points) of the abelian variety J_0(N). Besides being able to apply all these resources of Diophantine techniques, there are the simple constructions that are easy to take advantage of. For example, if you have a `moduli space' ℳ whose K-rational points for every number field K provides a classification of your problem over K, then, say, for any prime p the set of K-rational points of the algebraic variety that is the p-th symmetric power of ℳ —denoted ^p(ℳ)— essentially classifies the same problem ranging over all extensions of K of degree p. As an illustration of this, consider cyclic isogenies of degree N and noting that the natural -rational mapping ^p(X_0(N)) ⟶ J_0(N) given by (x_1,x_2,…, x_p) ↦ Divisor class of [∑_ix_i - p·∞] has linear spaces as fibers, we get that the classification problem of all cyclic N-isogenies of elliptic curves over all number fields of degree p is geometrically related, again, to the Mordell-Weil group of J_0(N) over . A particularly nice example of this strategy carried out in the case of the symmetric square ^2 of Bring's curve is given in the appendix by Netan Dogra. Bring's curve is the smooth projective genus 4 curve in ℙ^4 given as the common zeros of the following system of equations: x_1 + x_2 + x_3 + x_4 + x_5 = 0 x_1^2 + x_2^2+ x_3^2 + x_4^2 + x_5^2 = 0 x_1^3 + x_2^3+ x_3^3 + x_4^3 + x_5^3 = 0. It has no real points and thus no rational points. However, there are a number of points defined over (i), such as (1: i: -1: - i: 0). A natural question is thus if one can find all quadratic points on Bring's curve. Dogra proves that all quadratic points are defined over (i) and produces the complete list of (i)-rational points. Siksek gave a symmetric Chabauty method <cit.>, a variant of the Chabauty–Coleman method (see Section <ref>) for symmetric powers of curves. Symmetric Chabauty has been used and extended in various ways to determine quadratic points on numerous modular curves X_0(N) <cit.>. Box, Gajović, and Goodman <cit.> further developed a “partially relative” symmetric Chabauty method to study cubic and quartic points on several modular curves X_0(N). 
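As a quick illustration of the quadratic points mentioned above, one can verify directly that the point (1 : i : −1 : −i : 0) satisfies the three defining equations of Bring's curve; the short check below is ours and is included purely for the reader's convenience.

```python
# Check that (1 : i : -1 : -i : 0) lies on Bring's curve:
# sum x_j = sum x_j^2 = sum x_j^3 = 0.
P = [1, 1j, -1, -1j, 0]
for e in (1, 2, 3):
    assert abs(sum(x ** e for x in P)) < 1e-12, e
print("(1 : i : -1 : -i : 0) satisfies all three defining equations")
```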
§ ANDREW OGG'S TORSION CONJECTURE(S) (1973) Torsion in algebraic groups—even if not in that vocabulary—has played a fundamental role since Gauss's Disquisitiones Arithmeticae (1801), the structure of roots of unity (torsion in the multiplicative group) being a central concern in the development of modern number theory.[See Umberto Zannier's expository article <cit.>, Torsion in algebraic groups and problems which arise.]Andrew's Torsion Conjectures taken in broad terms can be formulated in terms of “the geometrization(s)," as just described—i.e., in terms of -rational points of modular curves—and the Mordell-Weil groups of abelian varieties (i.e., of their Jacobians): §.§ -Rational torsion Conjecture 1 (Ogg): An isomorphism class {C} of finite groups occurs as the torsion subgroup of the Mordell-Weil group of some elliptic curve (defined over ) if and only if the modular curve that classifies this problem is of genus zero[ A form of this conjecture was made by Beppo Levi in his 1908 ICM address in Rome. See <cit.> which gives a wonderful account of the story of Beppo Levi's engagement with (and his important results about) the arithmetic of elliptic curves—all this being even before Mordell proved that the group of rational points of an elliptic curve over is finitely generated. Levi considers the tactic of producing multiples of a rational point on an elliptic curves {n· P} n=1,2,3,… a “failure" if it loops finitely—i.e., if P is a torsion point; his aim is to classify such “failures." ] . Put in another way: an isomorphism class occurs if and only it is expected to occur; i.e., if it necessarily occurs, as a consequence of the ambient geometry—this view being a continuing guiding inspiration for number theory.By `geometry' one means the (algebraic) geometry of the curve X_0(N). For example, Andrew's article <cit.> discusses the curious case of X_0(37) which has two noncuspical -rational points, these being the images of the hyperelliptic involution (a non-modular involution) applied to the two cusps, both cusps being -rational[ See Section <ref> below.]. Andrew comments: As Mazur and I are inclining to the opinion that Y_0(N) has no -rational points except for a finite number of values of N, we are certainly interested in knowing when this sort of thing is going on, and in putting a stop to it if at all possible. §.§ -rational cyclic isogenies There are two different proofs of Conjecture 1. A major step in one of these proofs of Conjecture 1 is the full classification of -rational cyclic isogenies of prime degree; this is proved in <cit.>: Let N be a prime number such that some elliptic curve over admits a -rational N-isogeny. Then N=2, 3, 5, 7, 13 ( the genus zero cases) or N=11, 17, 19, 37, 43, 67, or 163. This result was followed by a sequence of papers of M.A. Kenku (<cit.>) that extends the classification to cyclic isogenies of any degree: The -rational cyclic isogenies of degree N of elliptic curves defined over only occur—and do occur—if 1 ≤ N ≤ 19 or if N=21, 25, 27, 37, 43, 67, or 163. Following in the spirit of Ogg's original view of torsion points, all of these N-isogenies can be given `geometric reasons' for existing; e.g., the 37-isogenies `come by' applying the hyperelliptic involution (it is non-modular!) to the cusps of X_0(37). 20pt §.§ Rational torsion points on the Jacobians of modular curves Let J_0(N) denote the Jacobian of X_0(N). 
Noting that the cusps of X_0(N) map to torsion points of J_0(N), denote by C_0(N) ⊂ J_0(N)_tors ⊂ J_0(N) the subgroup generated by those cusps, as summarized in the diagram
Cusps in X_0(N) ⊂ X_0(N)
        ↓                ↓
C_0(N) ⊂ J_0(N)_tors ⊂ J_0(N).
We have another, seemingly quite different type of conjecture: Conjecture 2: Let N be a prime number. We have: C_0(N) = J_0(N)_tors(ℚ) ⊂ J_0(N)(ℚ). Put in another way: there are no `unexpected' ℚ-rational torsion points in J_0(N): they all come from cusps. Conjectures 1 and 2 are known. For Conjecture 1, see <cit.> and also <cit.>. For Conjecture 2, see <cit.>. (Also see the broad survey of rational torsion results in Andrew Sutherland's <cit.>.) That these conjectures are interlinked is a long story, as we discuss in Section <ref>.
§.§ Conjecture 1 Letting C_n denote the cyclic group of order n, the complete list of possible (isomorphism classes of) finite groups that occur as torsion subgroups of the Mordell-Weil group of ℚ-rational points of elliptic curves is * C_n with 1≤ n ≤10, and also C_12, and * the direct sum of C_2 with C_2m, for 1≤ m ≤ 4. All these torsion groups occur infinitely often over ℚ, since the corresponding modular curves are all genus zero curves possessing a rational point.[See <cit.> where it is proved that each of these groups appears as a possible torsion group over any quadratic field.] Thanks to the work of Loïc Merel <cit.>, Joseph Oesterlé, Pierre Parent <cit.> and others, we have neat explicit upper bounds for the order of torsion points on elliptic curves over number fields of degree d. For surveys of this work, see <cit.> and <cit.>. Conjecture 1, having been completely resolved in the case of elliptic curves, has inspired more general uniform boundedness expectations for rational points; e.g., for abelian varieties A over number fields K: conjectures that the order of the torsion group of an abelian variety over a number field can be bounded in terms of the dimension of the variety and the number field; and still stronger versions: that the torsion is bounded in terms of the dimension of the variety and the degree of the number field. Moreover, it is striking how few additional isomorphism classes of K-rational torsion subgroups of elliptic curves can occur in elliptic curves over quadratic and cubic number fields K:
§.§ Torsion on elliptic curves over quadratic number fields Let K range through all quadratic number fields, and E all elliptic curves over these fields. Then the torsion subgroup E(K)_tors of E(K) is isomorphic to one of the following 26 groups: * C_n for 1≤ n ≤ 18, n ≠ 17, * the direct sum of C_2 with C_2m for 1≤ m≤ 6, * the direct sum of C_3 with C_3m for m=1,2, * C_4⊕ C_4.
§.§ Torsion on elliptic curves over cubic number fields Let K range through all cubic number fields, and E all elliptic curves over these fields. Then the torsion subgroup E(K)_tors of E(K) is isomorphic to one of the following 26 groups: * C_n for 1≤ n ≤ 18, n ≠ 17, * the direct sum of C_2 with C_2m for 1≤ m≤ 7, * C_20, C_21. There exist infinitely many ℚ̅-isomorphism classes for each such torsion subgroup except for C_21. In this case, the base change of the elliptic curve with LMFDB label https://www.lmfdb.org/EllipticCurve/Q/162/c/3/162.c3 to ℚ(ζ_9)^+ is the unique elliptic curve over a cubic field K with K-rational torsion group isomorphic to C_21.
§.§ Conjecture 2 expanded * The order of C_0(N) had been computed for square-free N thanks to Kubert and Lang <cit.>, and Takagi <cit.>. In this case (i.e., N square-free) the set of cusps is ℚ-rational.
* Ohta <cit.> has proved a generalization of Ogg's conjecture in the context of square-free N. That is, he proved that the p-primary parts of J_0(N)_tors(ℚ) and of C_0(N) are equal for p ≥ 5, and for p=3 if 3 doesn't divide N. Related to this, see <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>, <cit.>. And very recently the PNAS article <cit.> (Another look at rational torsion of modular Jacobians) by Ken Ribet and Preston Wake appeared, giving another approach to this issue. * In the more general context of N not squarefree, the cuspidal subgroup of J_0(N) may not consist entirely of rational points; nevertheless: Conjecture 2^*: J_0(N)_tors(ℚ) = C_0(N)(ℚ) ⊂ C_0(N).
§.§ Conjecture 2 further expanded Now let X (over ℚ) denote either X_0(N) or X_1(N) for some N ≥ 1. Let 𝒥 be the Jacobian of X, and 𝒞⊂𝒥 the finite étale subgroup scheme of 𝒥 generated by the cusps. Let K/ℚ be the field cut out by the action of Galois on 𝒞. Thus there's an exact sequence 0→ Gal(ℚ̅/K) → Gal(ℚ̅/ℚ) → Aut(𝒞(ℚ̅)). Define the cuspidal defect of X to be the cokernel of 𝒞(ℚ̅) = 𝒞(K) ↪𝒥(K)_tors. Conjecture 2^**: The `cuspidal defect' of either X listed above is trivial.
§ THE CONNECTION BETWEEN RATIONAL TORSION ON ELLIPTIC CURVES AND RATIONAL TORSION ON ABELIAN VARIETIES RELATED TO ELLIPTIC CURVES The easiest way to explain this is to follow the ideas of the proof of Conjecture 1 in <cit.>, rather than the ideas in the earlier and quite different proof given in <cit.>. To set things up, let N be a prime number such that X_0(N) is of genus greater than 0, let J_/ℤ be the Néron model of the Jacobian of X_0(N) over ℚ, and let X_0(N)^smooth_/ℤ ι↪ J_/ℤ be the smooth locus of the Zariski closure of X_0(N)_/ℚ in J_/ℤ, the embedding being defined by sending the cusp “∞"—viewed as a ℤ-valued section e ∈ X_0(N)_/ℤ—to the `origin section' of J_/ℤ.
By the classification of such group schemes we have that either 𝒢_/ ℤ is a constant (nontrivial ) group scheme, or else 𝒢_/ ℤ≃μ_2 (μ_2 ⊂𝔾_m being the kernel of multiplication by 2 in the multiplicative group scheme 𝔾_m). These possibilities also hold, of course, for the `conjugate section' α̅ f(x̅) ∈ A(ℤ): it is either the trivial section or it generates a finite flat group scheme 𝒢̅_/ ℤ⊂ A_/ ℤ that is either a constant group scheme or μ_2. Neither α nor α̅ are the trivial section of A_/ ℤ. Since f is formally smooth along the cuspidal sections if α or α̅ were the trivial section we would be led to a contradiction, as illustrated by the diagram 10pt10pt < g r a p h i c s > in that the image of the two depicted sections would converge onto the origin section of A contradicting formal smoothness along e. So α and α̅ are generators of nontrivial group schemes 𝒢 and 𝒢̅ respectively, these being either constant or μ_2. * If 𝒢 and 𝒢̅ are constant group schemes, then α and α̅ are sections of A over disjoint (as schemes) from the trivial section of A and therefore x and x̅ are disjoint (as schemes) from the cuspidal sections of X_0(N)_/ ℤ. It follows that the elliptic curves E and E̅ that are classified by x and x̅ have potentially good reduction everywhere. * And if α or α̅ generates a subgroup isomorphic to μ_2, since μ_2 is étale outside the prime 2 it follows that E or E̅ would have potentially good reduction except for the prime 2. Even though this is the start of a sketch of the proof of Conjecture 1 in <cit.>, from what we've just described, we can prove The only prime numbers N for which there exist elliptic curves over ℚ with rational torsion points of order N are: N=2,3,5,7. First note that X_1(N) is of genus 0 for N=2,3,5,7 so there are infinitely many elliptic curves with rational torsion points of order N for these primes. That list of primes and 13 are precisely the primes for which X_0(N) is of genus 0. Since the curious prime N=13 is taken care of by <cit.> where it is proven that there are no rational points of order 13 on elliptic curves over , to prove the theorem we may suppose N to be different from N=2,3,5,7, 13; equivalently, that X_0(N) is of genus greater than 0 so the discussion above applies. In particular, we assume that E is an elliptic curve over , of potential good reduction away from p=2, and possessing a rational point of order N=11 or N ≥17, where N is a prime. Since it has such a rational point, the Néron model of E over ℤ contains a constant subgroup scheme 𝒵 isomorphic to /(N·). For p a prime, let E_p denote the fiber at the prime p of the Néron model of E, so E_p is a group (scheme) of finite order over the prime field 𝔽_p. Since the specialization of 𝒵 to E_p defines N distinct 𝔽_p-rational points of E_p it follows that N divides |E_p(𝔽_p)|. If p>2, since E is of potentially good reduction, in the terminology of the theorem of Kodaira and Néron (cf. Theorem 15.2 and Table 15.1 in Section 15 in Appendix C of <cit.>) we have that E_p is not of multiplicative type—i.e., of “type I_ν" or “I_ν^*" for any ν. So either: * p is a prime of good reduction for E, or * it is of additive reduction. If E has additive reduction at p (i.e., the Néron fiber at p is of one of the types I, II, III or I^*, II^*, III^*; see Table 15.1 in loc.cit.) then E_p is an extension of the additive group 𝔾_a over 𝔽_p by a finite group of order ≤ 4. In particular |E_p(𝔽_p)| is divisible by p and is ≤ 4p. 
It follows that Equation <ref>, applied to the prime p=3 already shows that E cannot have additive reduction at p=3 for the primes N we are considering, so it must have good reduction–i.e., be an elliptic curve— at p=3. But since any elliptic curve over 𝔽_3 has at most 7 𝔽_3-rational points, we see, by (<ref>) that N is either 2, 3, 5, or 7. A significantly more detailed outline of the proof of Conjecture 1 is given as Steps 1-4 on pages 132, 133 of <cit.>—the full proof itself being in the body of that paper. § REMARKABLE `DIOPHANTINE STABILITY' Let L/K be an extension of (number) fields, and V an algebraic variety defined over K. Denote by V(K) the set of K-rational points of V. Say that V is Diophantine Stable for L/K, or L/K is Diophantine Stable for V, if the inclusion V(K) ↪ V(L) is an isomorphism, i.e.: if V acquires no new rational points after passing from K to L. Note that Theorem <ref> tells us that: For all but finitely many positive numbers N, the curve Y_1(N) (over ) is Diophantine Stable for all quadratic extensions L/. This is striking and suggests that Diophantine Stability is a common feature.[ Filip Najman suggested that one might add a comment that the Diophantine Stability phenomenon of Corollary <ref> holds more generally over number fields of any degree, given the results referred to in Remark <ref> in Subsection <ref> above.] Consider: Suppose A is a simple abelian variety over K and all K-endomorphisms of A are defined over K. Then there is a set 𝒮 of rational primes with positive density such that for every ℓ∈𝒮 and every n ≥ 1, there are infinitely many cyclic extensions L/K of degree ℓ^n such that A(L) = A(K). If A is an elliptic curve without complex multiplication, then 𝒮 can be taken to contain all but finitely many rational primes. and this is surely not the last word regarding the extent of Diophantine Stability, specifically if the base field K is and if A=E, an elliptic curve over . We conjecture that any such E is Diophantine stable for all but finitely many Galois extensions of prime degree greater than 5. § CYCLIC ISOGENIES DEFINED OVER QUADRATIC FIELDS So, what about uniformity results regarding cyclic N-isogenies of elliptic curves ranging over all quadratic fields? This question has been addressed in <cit.> and generalized to arbitrary number fields in <cit.>. § `EXPECTED' AND `UNEXPECTED' L-RATIONAL CYCLIC ISOGENIES FOR L RANGING THROUGH QUADRATIC FIELDS A corollary of a theorem of Faltings[For a discussion of this in the context of generalization(s) of the classical Mordell Conjecture—with references listing the people who also worked on this, see Thoughts about Mordell and uniformity of finiteness bounds: <https://people.math.harvard.edu/ mazur/papers/M.pdf>] is that: (Faltings) Let K be a number field and X a curve defined over K. Then X is Diophantine Stable for all but finitely many quadratic extensions L/K unless X is—of genus 0 or 1, or—hyperelliptic or bielliptic (over K). And, for a hyperelliptic and/or bielliptic curve X defined over K, Faltings proves that there are only finitely many quadratic points (relative to K) that are not parametrized by an infinite system of quadratic points arising by X being the double cover of a rational curve Y with a K-rational point; or an elliptic curve of Mordell-Weil rank greater than 0 over K: 90pt π^-1(Y(K))[d]^π[r]^⊂ X(K̅)[d]^π Y(K)[r]^⊂ Y(K̅). 
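The proof given above leans on the elementary fact that an elliptic curve over 𝔽_3 has at most 7 rational points (the Hasse bound gives 3 + 1 + 2√3 < 8). As a quick sanity check—our own script, not part of the original argument—one can brute-force all nonsingular Weierstrass equations over 𝔽_3:

```python
from itertools import product

p = 3  # the finite field F_3

def discriminant(a1, a2, a3, a4, a6):
    # standard discriminant of y^2 + a1*x*y + a3*y = x^3 + a2*x^2 + a4*x + a6
    b2 = a1 * a1 + 4 * a2
    b4 = 2 * a4 + a1 * a3
    b6 = a3 * a3 + 4 * a6
    b8 = a1 * a1 * a6 + 4 * a2 * a6 - a1 * a3 * a4 + a2 * a3 * a3 - a4 * a4
    return (-b2 * b2 * b8 - 8 * b4 ** 3 - 27 * b6 * b6 + 9 * b2 * b4 * b6) % p

def num_points(a1, a2, a3, a4, a6):
    affine = sum(1 for x, y in product(range(p), repeat=2)
                 if (y * y + a1 * x * y + a3 * y - x ** 3 - a2 * x * x - a4 * x - a6) % p == 0)
    return affine + 1  # plus the point at infinity

maximum = max(num_points(*c) for c in product(range(p), repeat=5) if discriminant(*c) != 0)
print(maximum)  # prints 7
```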
10pt10pt < g r a p h i c s > §.§ Isolated quadratic points Call the set of quadratic points of X that are not among such (infinite) systems of parametrized quadratic points isolated points. The infinite systems deserve to be called `expected quadratic points (over K) in X' given the geometry of the situation. But when X=X_0(N) for some N and K = there may also be a few other points of X_0(N) over quadratic imaginary fields (√(d)) of class number 1; i.e., d = -1, -2, -3, -7, -11, -19, -43, -67, -163 that deserve the title “expected.” Namely, if E is an elliptic curve over that is CM with CM field K (√(d)) (with d in the above list) then for any positive integer N with the property that all of its prime divisors are (unramified and) split in K, E has a K-rational cyclic isogeny of degree N; hence is classified by a K-rational point of X_0(N). Such a point is therefore also “expected.” So: §.§ Sporadic quadratic points Call a quadratic point of X_0(N) sporadic (quadratic) if: * it is not a cusp, and * is isolated; i.e., * is not the inverse image of a -rational point in ℙ^1 via a hyperelliptic covering (i.e., a degree 2 mapping X_0(N)→ℙ^1), in the case where X_0(N) is hyperelliptic, and * is not the inverse image of a -rational point in an elliptic curve E via a bielliptic covering (i.e., a degree 2 mapping X_0(N)→ E), in the case where X_0(N) is bielliptic,and * is not a point of X_0(N) classifying a CM elliptic curve and cyclic isogeny of degree N as described above. Ranging over all X_0(N)'s for N ∈ℤ_≥ 1 there are only finitely many sporadic quadratic points. Surely all of us agree with the spirit of the quotation of Andrew's view regarding rational torsion in Section <ref>. That is, we're interested “in knowing when this [sporadic quadratic points] sort of thing is going on, and in putting a stop to it if at all possible." Thanks to the recent work of a number of people, the sporadic quadratic points of all of the curves X_0(N) that are hyperelliptic or bielliptic have been computed, as we will discuss in the next section. Sheldon Kamienny made the following comment: The existence of sporadic points always left me scratching my head. Do they fit into a framework, or is it just nature being unkind? § HYPERELLIPTIC X_0(N) A classical theorem of Ogg <cit.> gives the nineteen values of N for which X_0(N) is hyperelliptic (we take hyperelliptic to require that the genus is >1):10pt [ N: 22 23 26 28 29 30 31 33 35 37; genus: 2 2 2 2 2 3 2 3 3 2 ] 10pt [ N: 39 40 41 46 47 48 50 59 71; genus: 3 3 3 5 4 3 2 5 6 ] 10pt The levels N that appear in boldface above are those values of N such that X_0(N) is bielliptic as well as hyperelliptic. All sporadic quadratic points for any of those modular curves X_0(N) (except for X_0(37)) have been computed by Peter Bruin and Filip Najman in their article <cit.> (which has other interesting results as well). The case of X_0(37) is taken care of in Josha Box's paper <cit.>, in which all sporadic quadratic points have also been computed for the curves X_0(N) with N=43, 53, 61, 65, these being bielliptic curves covering elliptic curves of positive Mordell-Weil rank. These are the values of N for which X_0(N) is of genus >1 and bielliptic (over ): 10pt [ 22 26 28 30 33 34 35 37 38; 39 40 42 43 44 45 48 50 51; 53 54 55 56 60 61 62 63 64; 65 69 72 75 79 81 83 89 92; 94 95 101 119 131 ] 10pt Until very recently there remained a dozen entries in the above table for which we did not know the set of their isolated quadratic points. 
Thanks to Filip Najman and Borna Vukorepa <cit.> we now have computation of the isolated quadratic points for all bielliptic curves X_0(N) (as we also do for all hyperelliptic X_0(N)). § EXCEPTIONAL QUADRATIC POINTS Let N be prime, and w_N X_0(N)→ X_0(N) the Atkin-Lehner involution. This involution is given by sending a pair (representing a point in X_0(N)) (E, C_Nα↪ E) —consisting of an elliptic curve E and C_N a cyclic subgroup of order N— to the pair (E', C'_Nα'↪ E'). Here E' E/C_N and C'_N E[N]/C_N (where E[N] is the kernel of multiplication by N in E). Forming the quotient, X_0(N)^+ X_0(N)/ action of w_N we get the double cover X_0(N) π⟶ X_0(N)^+ For N an integer where X_0(N)^+ of genus >1, * call a -rational point of X_0(N)^+ exceptional if it is neither a cusp nor a point classifying a CM elliptic curve; * call a quadratic point P of X_0(N) exceptional if it is not defined over (i.e., it is an honest quadratic point) and the image of P in X_0(N)^+ is an exceptional -rational point. Exceptional points deserve the adjective, since they have the intriguing structure of a duo of cyclic N-isogenies: EN↔ E' and E'N↔ E. This structure can also be combined into a single abelian surface defined over : A E× E' endowed with an endomorphism: “√(N)": (x,y)↦(α'(y), α(x)). What tools do we have to compute the exceptional -rational points on X_0^+(N)? The classical method of Chabauty-Coleman (see Section <ref>) computes a usable bound for the number of rational points on a curve X (of genus >1) provided that the rank r of the Mordell-Weil group of the Jacobian of X is strictly less than its genus g. But the Birch and Swinnerton-Dyer conjecture predicts that (for N prime) the rank r_0(N)^+ of J_0(N)^+(), the Mordell-Weil group of the Jacobian of X_0(N)^+, is greater than or equal to g_0(N)^+, the genus of X_0(N)^+. So this classical method can't be brought to bear here. Computationally, we have many examples where there's actual equality: r_0(N)^+=g_0(N)^+. (Indeed, this is true for all N < 5077 for which g_0(N)^+ > 1.) Happily, for exactly such cases—i.e., for curves X of genus >1 with r=g— we have the more recent “Quadratic Chabauty–Coleman–Kim" method that offers a new approach to compute the set of all ℚ-rational points[ We think it is reasonable to conjecture that the average value of the ratios r_0(N)^+/g_0(N)^+ is 1; e.g., as N ranges through prime values; are these ratios bounded?]. For example see <cit.> (and Section <ref>). Indeed, there are also two new viewpoints on quadratic Chabauty, the geometric perspective of Edixhoven–Lido <cit.> and the (p-adic) Arakelov theoretic one of Besser–Müller–Srinivasan <cit.>. We will say more about these in the following section. The list of curves X_0(N)^+ of genus 2 or 3 with N prime is a result of Ogg. We have the following: Theorem (Ogg) For N prime, X_0(N)^+ is of genus 2 if and only if N ∈{67, 73, 103, 107, 167, 191} and it has genus 3 if and only if N ∈{97, 109, 113, 127, 139, 149, 151, 179, 239}. Elkies and Galbraith <cit.> found exceptional rational points on X_0^+(N) for N = 73, 91, 103, 191 and N = 137, 311 (which are of genus 4). In <cit.>, it was shown that the only prime values of N with X_0^+(N) of genus 2 or 3 that have an exceptional rational point are N = 73, 103, 191 (all genus 2). In particular, for prime N, if X_0^+(N) is of genus 3, it has no exceptional rational points. 
Adžaga, Arul, Beneish, Chen, Chidambaram, Keller, and Wen <cit.> showed that the only prime values of N with X_0^+(N) of genus 4, 5, or 6 that have an exceptional rational point are N = 137 and 311. Thus for all of the above values of N, we have a complete understanding of the exceptional quadratic points on X_0(N). We briefly discuss the work of <cit.> on the genus 4 curve X_0(311)^+. Using the canonical embedding, a model for X_0(311)^+ is given by the following equations in ℙ^3: X^2 + W Y - 2 X Y + 2 Y^2 + 7 X Z - 8 Y Z + 13 Z^2 = 0, W X^2 - 2 W X Y + X^2 Y - W Y^2 - X Y^2 - 2 Y^3 + W^2 Z + 6 W X Z - X^2 Z - W Y Z + 5 X Y Z + 4 Y^2 Z + 7 W Z^2 - 4 X Z^2 - 2 Z^3 = 0. Using quadratic Chabauty (see Section <ref>) at p=5 on a plane model, they show that there are precisely five rational points on the curve: rational point on X_0(311)^+ type of point (1 0 0 0) cusp (1 -1 -1 0) CM, D = -11 (1 2 -1 -1) CM, D=-19 (2 0 -1 0) CM, D=-43 (6 8 -1 -2) exceptional Galbraith <cit.> had earlier computed that the j-invariant of the ℚ-curve corresponding to the exceptional point is j = 31244183594433270730990985793058589729152601677824000000 ± 1565810538998051715397339689492195035077551267840000√(39816211853). See also the survey article <cit.> (and <cit.>) in which exceptional points found by Elkies and Galbraith are defined and studied in the context of ℚ-curves; and for the list of the seven known exceptional N-isogenies, these being rational over a quadratic field of discriminant Δ: [ N g Δ; 73 2 -127; 91 2 -3 · 29; 103 2 5· 557; 125 2 509; 137 4 -31159; 191 2 61· 229· 145757; 311 4 11· 17· 9011· 23629; ] By the work of <cit.> this gives a complete list of exceptional isogenies arising from rational points on the curves X_0(N)^+ of level N and genus at most 6. Are these the only exceptional isogenies? There's lots to be done. § THE METHOD OF CHABAUTY–COLEMAN–KIM AND `QUADRATIC CHABAUTY' §.§ The classical method of Chabauty The aim of this `classical' method is to prove finiteness of the set of -rational points of a curve X of genus g>1 under the assumption that the rank r of the Mordell–Weil group of the Jacobian J of X is small; specifically, if it is strictly less than g. One can assume that X has at least one -rational point, for otherwise the job is done. Choosing a rational point b ∈ X(), form the Abel-Jacobi embedding i_b: X → J P ↦ [(P) - (b)]. For any prime p viewing J(_p) as a p-adic analytic group (of dimension g) containing the Mordell-Weil group J() as subgroup, denote by Γ_p ⊂ J(_p) the topological closure of J() in J(_p) noting that, given our hypothesis, its dimension is less than r. We have:120ptX()[r]^i_b[d]^⊂ Γ_p[d]^⊂ X(_p)[r]^i_b J(_p)X(_p) is a (proper) p-adic analytic subvariety of J(_p) that generates J(_p) as a p-adic analytic group. It follows that X(_p) is not contained in the proper subgroup Γ_p and therefore X(_p)∩Γ_p is finite; hence so is X(). How can one make this method effective? §.§ The method of Chabauty as augmented by Coleman The Chabauty–Coleman method <cit.> is one of our most practical tools for actually computing the finite set of rational points on a curve X of genus greater than 1 defined over the rationals, subject to the same Chabauty condition; namely that the Mordell-Weil rank r of the Jacobian of the curve is strictly less than its genus.. 
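Referring back to the canonical model of X_0(311)^+ displayed above, the five rational points in the table can be checked directly against the two defining equations. The short script below is our own verification, written under the assumption that the listed coordinates are ordered (W : X : Y : Z) to match the variables appearing in the equations.

```python
def on_model(W, X, Y, Z):
    quadric = X**2 + W*Y - 2*X*Y + 2*Y**2 + 7*X*Z - 8*Y*Z + 13*Z**2
    cubic = (W*X**2 - 2*W*X*Y + X**2*Y - W*Y**2 - X*Y**2 - 2*Y**3 + W**2*Z + 6*W*X*Z
             - X**2*Z - W*Y*Z + 5*X*Y*Z + 4*Y**2*Z + 7*W*Z**2 - 4*X*Z**2 - 2*Z**3)
    return quadric == 0 and cubic == 0

points = [(1, 0, 0, 0), (1, -1, -1, 0), (1, 2, -1, -1), (2, 0, -1, 0), (6, 8, -1, -2)]
assert all(on_model(*P) for P in points)
print("all five listed rational points lie on the model of X_0(311)^+")
```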
Robert Coleman constructed, in the above conditions, a p-adic analytic function ϕ on the p-adic analytic variety X^an_/_p such that the zeroes of ϕ on X(_p) are * reasonably computable (to any approximation), * finite in number, and * include X(). The construction of such a ϕ uses Coleman's p-adic abelian integrals on the Jacobian of the curve. Let X be a curve (of genus g>1) defined over the rationals and let J be its Jacobian. Now fix a prime p of good reduction for X and a rational point b ∈ X(). Consider, as before, the Abel-Jacobi embedding i_b: X → J given by P ↦ [(P) - (b)]. Coleman <cit.> proved that there is a p-adic line integral on holomorphic differentials on the curve satisfying several nice properties (linearity in the integrand, additivity in endpoints, pullbacks under rigid analytic maps, Galois compatibility). The map J(_p) × H^0(X__p, Ω^1) →_p (Q, ω) ↦⟨ Q, ω⟩ is additive in Q, is _p-linear in ω and is given by ⟨ Q, ω⟩ = ⟨ [D], ω⟩∫_D ω for D ∈^0(X) with Q = [D]. Then ⟨ i_b(P), ω⟩ = ∫_b^P ω. The embedding i_b induces an isomorphism of g-dimensional vector spaces H^0(J__p, Ω^1) ≃ H^0(X__p, Ω^1), giving us the pairing J(_p) × H^0(J__p, Ω^1) →_p (Q, ω_J) ↦∫_0^Q ω_J. This gives a homomorphism log: J(_p) → H^0(J__p, Ω^1)^*, where log is the logarithm on the p-adic Lie group J(_p), and we have the following diagram 8.5cm![->,>=stealth',baseline=(current bounding box.center)] [] (X) X(); [right of=X, node distance=3.7cm] (Xp) X(_p); [below of=X, node distance=1.5cm] (Hf) J(); [right of=Hf,node distance=3.7cm] (Hfp) J(_p); [right of=Hfp,node distance=3.5cm](Dieu) H^0(J__p,Ω^1)^∗≃ H^0(X__p,Ω^1)^∗; (X) edge node[left] (Hf); (Xp) edge node[left] (Hfp); (X) edge (Xp); (Hf) edge node[above] (Hfp); (Hfp) edge node[above]log(Dieu); (Xp) edge node[above right] (Dieu); Recall that under the hypothesis r < g, the intersection X(_p)_1 X(_p) ∩Γ_p and, consequently, X() is finite. Coleman gave a technique to compute X(_p)_1, by his construction of p-adic integrals that vanish on Γ_p: in particular, considering an integral of an annihilating differential ω, a holomorphic differential such that ⟨ P, ω⟩ = 0 for all P ∈ J(), then computing the zero locus of this integral on X(_p). Bounding the number of zeros of this integral via fairly elementary p-adic analysis (for good p > 2g) yields the bound #X() ≤#X(_p)_1 ≤#X(_p) + 2g-2. In Section <ref>, we give a worked example of the Chabauty–Coleman method. §.§ The method of Chabauty–Coleman–Kim The construction above crucially uses an assumption that the rank of the Jacobian is small relative to the genus. Nevertheless, there are many interesting curves where this hypothesis is not satisfied, including a number of modular curves we have already seen. In a series of papers <cit.>, Minhyong Kim laid out a program to extend Chabauty–Coleman relaxing the condition on Mordell–Weil rank, going beyond the abelian confines of the Jacobian, replacing it by a sequence of Selmer varieties, which are carved out of unipotent quotients of π_1^(X_)__p, the _p-étale fundamental group of X_ with base point b. We first recast the Chabauty–Coleman method (see also <cit.>, <cit.>) using p-adic Hodge theory, which adds an extra row of compatibilities to diagram (<ref>). Let V = ^1_(X_, _p)^* and V_^1_(X__p)^*, viewed as a filtered vector space with filtration dual to the Hodge filtration. We have an isomorphism V_/F^0 ≃^0(X__p, Ω^1)^*. Let G_T be the maximal quotient of G_ unramified outside T, the set of primes of bad reduction of X, together with the prime p. 
Let G_p denote the absolute Galois group of _p. Then the étale formulation of Chabauty–Coleman is given by the following diagram, where the last row is of Bloch–Kato Selmer groups: 8.5cm![->,>=stealth',baseline=(current bounding box.center)] [] (X) X(); [right of=X, node distance=3.7cm] (Xp) X(_p); [below of=X, node distance=1.5cm] (Hf) J(); [right of=Hf,node distance=3.7cm] (Hfp) J(_p); [right of=Hfp,node distance=3.5cm](Dieu) ^0(X__p,Ω^1)^∗; [below of=Hf, node distance=1.5cm] (H1f) ^1_f(G_T,V); [right of=H1f,node distance = 3.7cm] (H1fp)_f^1(G_p,V); [right of=H1fp, node distance=3.5cm](H1dR) _1^(X__p)/F^0; (X) edge node[left] (Hf); (Xp) edge node[left] (Hfp); (X) edge (Xp); (Hf) edge node[above] (Hfp); (Hfp) edge node[above]log(Dieu); (Xp) edge node[above right]i_b (Dieu); (Hf) edge node[above] (H1f); (Hfp) edge node[above] (H1fp); (Dieu) edge node[right] ≃ (H1dR); (H1f) edge node[right] (H1fp); (H1fp) edge node[above] ≃ (H1dR); . Now let U be a Galois-stable unipotent quotient of π_1^(X_)__p. Kim defined global and local unipotent Kummer maps j_U and j_U_v such that the following diagram commutes: [->,>=stealth',baseline=(current bounding box.center)] [] (X) X(); [right of=X, node distance=4.7cm] (Xv) ∏_v ∈ TX(_v); [below of=X, node distance=1.5cm] (Hf) ^1(G_T,U); [right of=Hf,node distance=4.7cm] (Hfv) ∏_v ∈ T^1(G_v,U).; (X) edge node[left]j_U (Hf); (Xv) edge node[right]∏ j_U,v (Hfv); (X) edge (Xv); (Hf) edge node[above]∏loc_v (Hfv); Kim proved that the nonabelian pointed cohomology sets ^1(G_T, U) and ^1(G_v, U) are affine algebraic varieties over _p. Motivated by the classical study of Selmer groups, he then refined ^1(G_T, U) by local conditions to produce a Selmer variety. We give an adapted version <cit.> of the definition here: The Selmer variety (U) is the reduced scheme associated to the subscheme of ^1(G_T, U) containing those classes c such that * _p(c) is crystalline, * _ℓ(c) ∈ j_U,ℓ(X(_ℓ)) for all ℓ p, * the projection of c to ^1(G_T,V) comes from an element of J()⊗_p. Now the Selmer variety gives rise to the following interesting set of points X(_p)_U j_p^-1(_p ((U))) ⊂ X(_p ). We have that X()⊂ X(_p)_U. Suppose that U is a Galois-stable quotient of U_n, the maximal n-unipotent quotient of π_1^(X_, b)__p. Then X() ⊂ X(_p)_n X(_p)_U_n⊂ X(_p)_U . The depth-n Selmer set X(_p)_n can be computed in terms of n-fold iterated Coleman integrals, and one has a series of refinements X() ⊂⋯⊂ X(_p)_n ⊂ X(_p)_n-1⊂⋯⊂ X(_p)_2 ⊂ X(_p)_1. Note that the depth-1 Selmer set is the Chabauty–Coleman set from before. We refer to the points in X(_p)_n as the set of Selmer points of level n. We call the points in X(_p)_n ∖ X() the set of mock-rational Selmer points of level n. Kim has conjectured that for n ≫ 0, the set X(_p)_n is finite. This conjecture is implied by the conjecture of Bloch–Kato. 
Putting everything together, Kim's program is to study finiteness of X(_p)_U using p-adic Hodge theory and the following diagram is the nonabelian generalization of (<ref>): [->,>=stealth',baseline=(current bounding box.center)] (m) [matrix of math nodes, row sep=3em, column sep=4em, minimum width=2em] X() X(_p) (U) _f^1(G_p,U) U^/Fil^0 ; [-stealth] (m-1-1) edge (m-1-2) edge node [left] j_U (m-2-1) (m-1-2) edge node [left] j_U,p (m-2-2) edge node [above,right] j_U^ (m-2-3) (m-2-1) edge node [above] loc_U,p (m-2-2) (m-2-2) edge node [above] ≃ (m-2-3); Computing the depth-2 Selmer set (or a slightly larger finite set containing it), known as quadratic Chabauty, has seen progress in recent years <cit.>, via aspects of the theory of p-adic height functions <cit.>. The quadratic Chabauty set X(_p)_2 is finite for those curves that satisfy the rank bound <cit.> r < g + ρ - 1, where ρ((J)) is the Néron–Severi rank of the Jacobian over . To carry out the quadratic Chabauty method, one uses a nontrivial element of ((J) →(X)) to construct a nonabelian quotient U of U_2, which is used to compute X(_p)_U. Siksek <cit.> showed that modular curves of genus 3 or more have ρ at least 2, and consequently, for these curves, quadratic Chabauty allows one to consider Jacobians of higher rank than allowed by Chabauty–Coleman. Balakrishnan, Dogra, Müller, Tuitman, and Vonk <cit.> made various aspects of quadratic Chabauty computationally practical, using explicit p-adic cohomology to compute a certain (global) p-adic height of Nekovář, depending on a choice of a nontrivial element of ((J) →(X)). Roughly speaking, the method starts from the following observation: the global p-adic height admits a decomposition as a sum of local heights: a local height at p that can be computed using p-adic Hodge theory, and a finite sum of local heights away from p that, in certain favorable conditions, can be shown to be trivial—or if not trivial, at least a quantity that can be computed from the geometry of a regular model of the curve. Moreover, the global p-adic height is a quadratic form on H^0(X, Ω^1)^*. Choosing an explicit basis for the space of quadratic forms in terms of Coleman integrals, and knowing sufficiently many rational points (either on X or on J) and their p-adic heights, one can compute a locally analytic function whose zero locus contains X(_p)_2. Recently, two new perspectives on quadratic Chabauty have emerged: the geometric one of Edixhoven–Lido <cit.> and the p-adic Arakelov theoretic one of Besser–Müller–Srinivasan <cit.>. In geometric quadratic Chabauty, Edixhoven and Lido <cit.> use line bundles over the Jacobian, the Poincaré torsor (a biextension of the Jacobian by 𝔾_m), and models over the integers to study rational points under the same rank bound hypothesis. Besser, Müller, and Srinivasan <cit.> give a new construction of p-adic heights on varieties over number fields using p-adic adelic metrics on line bundles, in the spirit of Zhang's work on real-valued heights using adelic metrics <cit.>. This leads them to formulate p-adic Arakelov quadratic Chabauty. § RATIONAL POINTS ON X_0(37): THREE PERSPECTIVES As a concrete application of the techniques discussed so far, we present here three perspectives on rational points on the modular curve X_0(37). For further discussion, see Section 5 of <cit.>; and for more, see Section 5 of <cit.>. The modular curve X X_0(37) is of genus 2 and therefore is hyperelliptic. 
Denote by X_0(37)σ⟶ X_0(37) its hyperelliptic involution, and by X_0(37)w⟶ X_0(37) its Atkin-Lehner involution. The involutions σ and w commute, generating a Klein group 𝒢 of automorphisms. The automorphisms 1, w, σ, wσ are defined over and are the only automorphisms of X_0(37) over . Form the quotients X_0(37)[dl]_i_0[d]^x[dr]^i_1 E_0 X_0(37)/⟨σ· w⟩ ℙ^1_/≃ X_0(37)/⟨σ⟩ E_1 X_0(37)/ ⟨ w ⟩. By the Riemann–Hurwitz formula, the ramification locus of each of the double covers: X_0(37) i_0⟶E_0 and X_0(37) i_1⟶ E_1 are -rational (effective) divisors of degree two: * D_0 {η_0,η̅_0}⊂ X_0(37)—for (<ref>). * D_1 {η_1,η̅_1}⊂ X_0(37)—for (<ref>). In particular, D_1 is the fixed point set of w and D_0 is the fixed point set of wσ. Note that (since σ commutes with w) each of these involutions (σ, w, wσ) preserves D_1 and D_0. The involution wσ interchanges the points η_1,η̅_1. So their image e_0∈ E_0—which is therefore the image of a -rational divisor in X_0(37)— is -rational. Consequently {η_1,η̅_1} either consists of a pair of -rational points[ that's not the case; see Lemma <ref> below] or a conjugate pair of quadratic points in X_0(37). For the same reason the involution w preserves the ramification divisor of wσ and interchanges the points η_0,η̅_0 and therefore their image e_1∈ E_1 is -rational. A visit to the L-Functions and Modular Forms Database (LMFDB) <http://www.lmfdb.org/EllipticCurve/Q/> (with a bit of work) will get you that: * E_0 is the elliptic curve https://www.lmfdb.org/EllipticCurve/Q/37/b/237.b2: y^2+y=x^3+x^2-23x-50. Its Mordell-Weil group is of order 3. * E_1 is the elliptic curve https://www.lmfdb.org/EllipticCurve/Q/37/a/137.a1: y^2+y=x^3-x. It has Mordell-Weil rank 1, and its group of -rational points is isomorphic to . * (Classical Chabauty gives finiteness) Let J_0(37) denote the Jacobian of X_0(37). We have:80pt X_0(37)[r]^⊂[dr]^i_0× i_1 J_0(37)[d]^ϕ E_0× E_1 where i_0,i_1 are (as above) the modular parametrization of E_0,E_1, and ϕ: J_0(37) E_0× E_1 is an isogeny. Since {E_0× E_1}() is—by the data above—a group isomorphic to ×/3 (contained in a cyclic group of order three times the elliptic curve E_1) we see that the Zariski closure of the group of -rational points J_0(37)() is an algebraic subgroup in J_0(37) of codimension 1 so can intersect only finitely with X_0(37)—giving that X_0(37)() is finite. * (The projection to E_0 gets us the precise set of -rational points)The cusp ∞∈ X_0(37) is a -rational point, as are the four points 𝒮=𝒢·∞={∞, w(∞)= the cusp 0, σ(∞), σ( 0) }. These are the only four -rational points on X_0(37). Returning to the mapping of degree two X_0(37)() i_0⟶ E_0(), since E_0() is cyclic of order three, we see that * the pair {∞, σ w(∞)=σ( 0)} maps to the origin in E_0() and * the pair { 0, σ w( 0)= σ(∞)} maps to a (nonzero) point e∈ E_0(). * Recalling that the pair {η_0,η̅_0} discussed above maps to e_0∈ E_0() and noting that e_0 cannot be any of the above two -rational points of E_0, it must be the third -rational point, giving us: e_0 = 2e=-e ∈ E_0(). The inverse image of e_0 =2e=-e in X_0(37) consists of a pair of (√(37))-conjugate points {η, η̅}∈ X_0(37)((√(37))). X_0(37)() = 𝒮.Proof of Lemma <ref> The involutions σ and w of X_0(37) are easily described in terms of the model of X_0(37) given by Equation (<ref>): y^2 = -x^6 -9x^4 - 11x^2 + 37. We have: * (x,y) σ↦ (x,-y), * (x,y) w↦ (-x,y) and * (x,y) wσ↦ (-x,-y). The proof of (c) follows from (a) and (b) by composition. 
The proof of (a) is simply that the quotient of the involution (x,y) ↦ (x,-y) is of genus zero as is clear from the equation; so that involution is the hyperelliptic involution σ. The proof of (b) follows from considering the following model (over ) for the expression of X_0(37) as the double cover X_0(37) i_1⟶ E_1 = X_0(37)^+ over : X_0(37):[d]^i_1 y^2 = -x^6 -9x^4 - 11x^2 + 37 E_1: v^2 = -u^3 -9u^2 - 11u + 37[u]^u=x^2; v=y Since {η_1,η̅_1} consists of the fixed points of the involution w, we have: {η_1,η̅_1} = { (0, ±√(37))}, from which it follows that wσ:(0, ±√(37)) ↦ (0, ∓√(37)) and therefore i_0(η_1)=i_0(η̅_1) = i_0(0, +√(37)) = i_0(0, -√(37)) ∈ E_0(), i.e., it is a -rational point of E_0, which can be neither i_0(∞) nor i_0( 0) so must be the third -rational point. * (The classical Chabauty method would also give us the set of -rational points)Fix the model y^2 = g(x) -x^6 -9x^4 - 11x^2 + 37 for X over . Since J_0(37)() has Mordell–Weil rank 1, which is less than the genus of the curve, we may carry out the Chabauty–Coleman method to compute the set X(). We use the prime p = 3. Searching in a box for rational points of small height, one finds the points (± 1, ± 4) ∈ X(). The point P [(1,-4) - (-1,4)] ∈ J_0(37)() is non-torsion, since the 3-adic Coleman integral of a holomorphic differential along this point is nonzero: ∫_(-1,4)^(1,-4)x dx/y = 3^2 + 2 · 3^3 + 3^4 + 2 · 3^5 + 3^7 + O(3^9). Moreover, ∫_(-1,4)^(1,-4)dx/y = O(3^9). Thus we may take dx/y as our annihilating differential. The curve X over _3 has the following rational points: (0,1), (0,2), (1,1), (1,2), (2,1), (2,2) ∈ X__3(_3), which correspond to the residue disks over which we carry out our computation. Fixing as our basepoint (-1,4) ∈ X(), we start in the residue disk corresponding to (0,1). We take the following point in the residue disk S_0 = (0, 1 + 2· 3^2 + 3^4 + 2 · 3^5 + 3^7 + 2 · 3^8 + 2 · 3^9 + O(3^10)), at which we compute our local coordinate, producing S_t = (t, -3788 + (2159 + O(3^10))t^2 - (15737 + O(3^10))t^4 + - (23833 + O(3^10))t^6 + (746· 3^3 + O(3^10))t^8 + O(t^10)) =: (x(t),y(t)). We wish to compute the zeros of the power series I(3T), where I(T) = ∫_(-1,4)^S_0dx/y + ∫_S_0^S_Tdx(t) dt/y(t). We find I(3T) = (3 + 3^3 + 2 · 3^4 + 2 · 3^5 + 3^6 + 3^7 + 2 · 3^8 + 3^9 + 3^10 + O(3^11))T + (3^2 + 2 · 3^4 + 2 · 3^5 + 3^7 + 2 · 3^8 + 2 · 3^9 + 3^10 + O(3^12))T^3 + (3^6 + 3^7 + 2 · 3^8 + 3^9 + 3^10 + 3^11 + 2 · 3^13 + 2 · 3^14 + O(3^15))T^5 + (3^8 + 2 · 3^9 + 3^10 + 2 · 3^11 + 2 · 3^12 + 2 · 3^13 + 2 · 3^15 + O(3^17))T^7 + (3^7 + 2 · 3^8 + 2 · 3^10 + 2 · 3^11 + 3^12 + 3^14 + 2 · 3^16 + O(3^17))T^9 + O(T^10), which has precisely one zero at T = 0, corresponding to S_0, which we can identify, after fixing a choice of √(37)∈_3, as (0, √(37)). Continuing in this way, parametrizing each residue disk by a local coordinate and computing the zeros of the corresponding I(3T) in each residue disk, we find that X(_3)_1 = {(0, ±√(37)), (± 1, ± 4)}, from which we immediately produce X() = (± 1, ± 4). It was fairly lucky that X(_3)_1 = {(0, ±√(37)), (± 1, ± 4)} and was not much larger. Finding a small good prime p such that there are no mock-rational Selmer points—or where the mock-rational points are easily-recognized algebraic points—may be an issue. By the Weil bound, we know that #X(_p) grows linearly as p grows. So if we had used a larger prime p in the classical Chabauty–Coleman method, we would expect more p-adic points in X(_p)_1, and we may not be able to immediately recognize these extra points. 
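For readers who want to reproduce a computation of this kind, the following SageMath sketch (plain Python syntax, run inside Sage) sets up the model y^2 = -x^6 - 9x^4 - 11x^2 + 37, lists the residue disks over F_3, and evaluates Coleman integrals between the known rational points. The last step is only a sketch and rests on an assumption: Sage's built-in coleman_integrals_on_basis was written for odd-degree models and returns integrals of the basis differentials x^i dx/(2y) (differing from dx/y above by a factor of 2), so for this even-degree model one may need the even-degree Coleman integration code of Balakrishnan and collaborators.

from sage.all import QQ, GF, Qp, PolynomialRing, HyperellipticCurve

R = PolynomialRing(QQ, 'x')
x = R.gen()
X = HyperellipticCurve(-x**6 - 9*x**4 - 11*x**2 + 37)   # the model of X_0(37) used above

# Residue disks: the rational points of the reduction mod 3.
print(X.change_ring(GF(3)).points())    # the six affine points listed in the text

# Known rational points of small height (construction raises an error if not on X).
P0 = X(-1, 4)
Q0 = X(1, -4)

# Coleman integrals of the basis differentials x^i dx/(2y) from (-1, 4) to (1, -4).
# ASSUMPTION: Coleman integration for this even-degree model is available
# (stock Sage targets odd-degree models; otherwise use the even-degree extension).
K = Qp(3, 10)
XK = X.change_ring(K)
P, Q = XK(-1, 4), XK(1, -4)
print(XK.coleman_integrals_on_basis(P, Q))   # first entry should vanish to working precision:
                                             # dx/(2y) annihilates [Q - P], as in the text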
30pt § QUADRATIC POINTS ON BIELLIPTIC CURVES OF GENUS 2 USING QUADRATIC CHABAUTY In the previous section, we considered the problem of determining the finitely many rational points on X_0(37). We could also study the finite sets X_0(37)(K) for various other number fields K, one number field at a time. Or, we could further study ^d(X_0(37))(), as described in Section <ref>, which would tell us about all degree d points on X_0(37). We start by considering X_0(37)(K) for a fixed quadratic field K. If the rank of J_0(37)(K) is now 2, and if this is because the rank of E_0(K) increases to 1—recall from Section <ref> that the rank of E_0() is 0—then the Chabauty–Coleman method no longer applies. However, since X_0(37) is bielliptic and genus 2, we can use the method of <cit.>, which gives a particularly explicit description of quadratic Chabauty functions using p-adic height functions and double Coleman integrals on elliptic curves, for bielliptic genus 2 curves. We describe this below in some generality, and then use it to study rational points on X_0(37) over K=(i). Let K = or a quadratic imaginary extension, and let X/K be a genus 2 bielliptic curve y^2 = x^6 + a_4x^4 + a_2x^2 + a_0, with a_i ∈ K. Let C_1 and C_2 be the elliptic curves over K defined by the equations C_1: y^2 = x^3 + a_4 x^2 + a_2 x + a_0, C_2: y^2 = x^3 + a_2x^2 + a_4a_0 x + a_0^2, and let f_1: X → C_1 be the map that sends (x,y) to (x^2,y) and f_2: X → C_2 be the map that sends (x,y) → (a_0 x^-2, a_0 yx^-3). We will be considering the case where the Mordell-Weil ranks of C_1 and C_2 over K are equal to 1. Letting J denote the jacobian of X we have that the rank of J over K is 2. The natural mapping defined over K ^2(X) → J (i.e., setting p=2 in Equation <ref> in Section <ref>) is * an isomorphism if X is not hyperelliptic, or is * an isomorphism in the complement of an `exceptional fiber' ℰ⊂^2(X) isomorphic to ℙ^1 over K if X is hyperelliptic. The rank two group J(K) and—if X is hyperelliptic J(K) together with ℰ≃ℙ^1(K) `parametrize' —in the appropriate sense—all quadratic points of X over K.These parametrization are neat, and explicit, but they still leave untouched the question: for a given quadratic field K what—exactly—is the finite set X(K)? We want to use quadratic Chabauty to answer such questions. Fix some auxiliary choices, including an idèle class character χ: G_K^ab→_p. When K = fix a prime = (p) to be a prime of good ordinary reduction. When K is imaginary quadratic, take p to be a rational prime that splits as where both and are primes of good ordinary reduction. Let h_C_1 and h_C_2 denote the global -adic height functions associated to the choices made above and h_C_i, the respective local height at , with the global height written as the sum of local heights h_C_i = ∑_v h_C_i, v. Suppose C_1(K) and C_2(K) each have Mordell–Weil rank 1, and let P_i ∈ C_i(K) be points of infinite order. Let α_i = h_C_i(P_i)/[K:]log_C_i(P_i)^2. Let Ω denote the finite set of values taken by -∑_v∤ p (h_C_1, v(f_1(z_v)) - h_C_2,v(f_2(z_v)) - 2χ_v(x(z_v))), for (z_v) ∈∏_v∤ p X(K_v). Then X(K) is contained in the finite set of z ∈ X(K_) cut out by the quadratic Chabauty function h_C_1,(f_1(z)) - h_C_2, (f_2(z)) - 2χ_(x(z)) - α_1log_C_1(f_1(z))^2 + α_2log_C_2(f_2(z))^2 ∈Ω, where log_C_i(Q) = ∫_∞^Q dx/2y, the single Coleman integral we saw in the Chabauty–Coleman method (with ∞ denoting the point at infinity on the corresponding elliptic curve) and h_C_i,(z) is a double Coleman integral. 
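Before specializing to X_0(37) over a quadratic field, it may help to sanity-check the two quotient maps symbolically: substituting f_1(x,y) = (x^2, y) into the equation of C_1 recovers the equation of X, and substituting f_2(x,y) = (a_0 x^{-2}, a_0 y x^{-3}) into the equation of C_2 recovers it up to the factor a_0^2/x^6. The short SageMath sketch below (plain Python syntax) verifies this for generic a_0, a_2, a_4; it is only an illustrative consistency check, not part of the quadratic Chabauty computation itself.

from sage.all import QQ, PolynomialRing

R = PolynomialRing(QQ, ['a0', 'a2', 'a4', 'x', 'y'])
a0, a2, a4, x, y = R.gens()

# X: y^2 = x^6 + a4*x^4 + a2*x^2 + a0
EX = y**2 - (x**6 + a4*x**4 + a2*x**2 + a0)

def C1(u, v):   # C1: v^2 = u^3 + a4*u^2 + a2*u + a0
    return v**2 - (u**3 + a4*u**2 + a2*u + a0)

def C2(u, v):   # C2: v^2 = u^3 + a2*u^2 + a4*a0*u + a0^2
    return v**2 - (u**3 + a2*u**2 + a4*a0*u + a0**2)

# f1: (x, y) |-> (x^2, y): the pullback of C1's equation is exactly X's equation.
assert C1(x**2, y) == EX

# f2: (x, y) |-> (a0/x^2, a0*y/x^3): the pullback of C2's equation equals
# (a0^2/x^6) times X's equation (the quotients live in the fraction field of R).
assert C2(a0 / x**2, a0 * y / x**3) == EX * a0**2 / x**6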
Over K = (i), the elliptic curves https://www.lmfdb.org/EllipticCurve/Q/37/a/137.a1 and https://www.lmfdb.org/EllipticCurve/Q/37/b/237.b2 each have rank 1. The computation in <cit.>, applies quadratic Chabauty as described above at the primes p = 41, 73, 101 to produce, for each prime p, a finite superset of p-adic points containing X(K). This is then combined with another method, the Mordell–Weil sieve, to give X_0(37)(K)= {(± 2i, ± 1),(± 1, ± 4), ∞, 0}. §.§ Explicitly determining quadratic points Quadratic Chabauty for bielliptic curves over was subsequently refined by Bianchi <cit.> using p-adic sigma functions in place of double Coleman integrals. This was recently extended by Bianchi and Padurariu <cit.>, where an implementation was given to study rational points on all rank 2 genus 2 bielliptic curves in the LMFDB, including the Atkin–Lehner quotient curve X_0(166)^* X_0(166)/⟨ w_2, w_83⟩ (with LMFDB label https://www.lmfdb.org/Genus2Curve/Q/13778/a/27556/113778.a.27556.1), as well as the Shimura curve X_0(10,19)/⟨ w_190⟩. Using a slight extension of their work to K = (i), as done in <cit.>, one can use a smaller prime to carry out the computation of a finite set containing the depth 2 Selmer set for X_0(37). (Recall Definition <ref> in Section <ref>.) We carried out this computation for p=13 and recovered the points (± 2i, ± 1),(± 1, ± 4), and ∞, 0. But lurking within the set of depth 2 Selmer points, we also found the algebraic points (±√(-3), ± 4), these being initially observed 73-adically in <cit.>. We also found several other mock-rational Selmer points, such as (5 + 8 · 13 + 12 · 13^2 + 4 · 13^3 + 2 · 13^4 + 3 · 13^5 + O(13^6), 1 + 3 · 13 + 3 · 13^2 + 9 · 13^3 + 12 · 13^4 + 5 · 13^5 + O(13^6)). . See Banwait–Najman–Padurariu <cit.> for an extensive discussion—and for results—regarding quadratic points on X_0(N). In particular they show that X_0(37)(ℚ(√(d))) = X_0(37)() for d= -6846, -2289, 213, 834, 1545, 1885, 1923, 2517, 2847, 4569, 6537, 7131, 7302, 7319, 7635, 7890, 8383, 9563, 9903. We could continue by varying the quadratic fields K, and in principle, if the rank is not too large, apply Chabauty–Coleman, quadratic Chabauty or variations thereof—possibly combining with other Diophantine techniques—to determine the K-rational points on X_0(37). But eventually the ranks outpace our current collection of Diophantine tools. For instance, over K = (√(-139)), a computation reveals that the elliptic curve E_0 has rank 3, as does E_1, and so J_0(37)(K) here altogether has rank 6, making it a challenge for existing methods. Now indeed, since X_0(37) is hyperelliptic, it has infinitely many quadratic points. Nevertheless, one can describe all quadratic points on X_0(37), using the ^2 perspective and the maps to the various quotients of X_0(37) in the diagram (<ref>), as was done by Box <cit.>. The hyperelliptic covering map x: X_0(37) →ℙ^1 is one source of infinitely many rational points, and the rank 1 elliptic curve quotient E_1 is another source of infinitely many rational points. Finally, the elliptic curve quotient E_0 gives three rational points, and Box pieced together these three sources of rational points to describe ^2(X_0(37))(), as below. The x-map gives us all points {(x_i,√(g(x_i))), (x_i,-√(g(x_i)))}∈^2(X_0(37))(), where x_i ranges through all rational numbers. We can find P_1 ∈ X_0(37)((√(-3))) such that [P_1 + P_1 - ∞- 0] generates the free part of the Mordell–Weil group of J_0(37)(), and we have the points 𝒫_1,0{P_1, P_1} and 𝒫_0,1{∞, w( 0)}. 
Finally, for any (a,b) ∈×/3∖{(0,0)}, there is a point 𝒫_a,b∈^2(X_0(37)()) defined by the unique effective degree 2 divisor P such that P - ∞ - 0∼ a𝒫_1,0 + b𝒫_0,1 - (a+b)(∞ + 0) for any lift of b to . § THANKS This paper expands the 45-minute talk that B.M. gave at the conference at the IAS (Talks Celebrating the Ogg Professorship in Mathematics - October 13, 2022). We are grateful to Barinder Banwait, Francesca Bianchi, Maarten Derickx, Netan Dogra, Minhyong Kim, Steffen Müller, Filip Najman, Ken Ribet, and Preston Wake for their illuminating comments. Thanks also to Netan Dogra for providing the appendix on Bring's curve. Thanks to the organizers in the IAS for organizing and hosting the conference in Andrew Ogg's honor, and thanks to Andrew for inspiring all of us. The research for this paper was partially supported by NSF grant DMS-1945452, the Clare Boothe Luce Professorship (Henry Luce Foundation), Simons Foundation grant no. 550023 (J.S.B.), and NSF grant DMS-2152149 (B.M.). § QUADRATIC POINTS ON BRING'S CURVE, BY NETAN DOGRA We consider Bring's curve, the smooth projective genus 4 curve X in ℙ^4 given as the common zeros of the following system of equations: x_1 + x_2 + x_3 + x_4 + x_5 = 0 x_1^2 + x_2^2+ x_3^2 + x_4^2 + x_5^2 = 0 x_1^3 + x_2^3+ x_3^3 + x_4^3 + x_5^3 = 0. From the quadratic defining equation of Bring's curve, we see that X() = ∅, so we have that X() = ∅. However, considering the curve instead over K = (i), we see several K-rational points: for instance, all permutations of the coordinates of the points (1: ± i: -1: ∓ i: 0) are in X((i)). Could there possibly be more points? The only quadratic points on Bring's curve are over (i), and up to permutation of coordinates, they are (1: ± i: -1: ∓ i: 0). The automorphism group of X is the symmetric group S_5, given by permutation of the five coordinates. Using the action of S_5 on X, one can see that the Jacobian J of X is isogenous to E^4 <cit.>, where E is the rank zero elliptic curve with LMFDB label https://www.lmfdb.org/EllipticCurve/Q/50/a/350.a3. Since Bring's curve is not hyperelliptic, the map ^2(X) ↪^0(X) is injective, and since ^0 (X)(ℚ) is finite it follows that there are only finitely many quadratic points on Bring's curve. There is also a simple description of a map ^2 (X)→ E^4 with finite fibers. The quotient of Bring's curve by the involution swapping two coordinates is isomorphic to the curve E': x^3 +y^3 +1 + x^2 y +y^2 x + x^2 +y^2 +xy+x+y=0 by projecting the three non-permuted coordinates to ℙ^2. This is isomorphic to the elliptic curve E:y^2 +5x^3+5x^2+4 = 0 (LMFDB label https://www.lmfdb.org/EllipticCurve/Q/50/a/350.a3) via (x,y)↦( 2/1+2x+2y,4(y-x)/1+2x+2y) . We have E() = {∞, (-2,± 4)}. The S_3-action on E' corresponds to the action of E( ) and -1 on E. Now fix a quadratic point P=(x_0 :x_1 :x_2 :x_3 :1 ) on Bring's curve. Up to an S_3 permutation, we may assume it maps to ∞ in E after quotienting by the involution switching x_0 and x_1. Suppose σ generates the Galois group of the field of definition of P. Let x=x_2 and y=x_3. Then y-x/1+2x+2y=-σ y-σ x/1+2σ x+2σ y. This reduces to the equation y+4 y = x +4 x. Thus quadratic points on Bring's curve are 5-tuples (x_1 :x_2 :x_3 :x_4 :x_5 ) of quadratic points in ℙ^5 satisfying, for all i_1 ,i_2 ,i_3 ⊂{1 ,2,3,4,5}, ∏ _σ∈ S_3 ( (x_i_σ (1)/x_i_σ (3))+4 (x_i_σ (1)/x_i_σ (3))- (x_i_σ (2)/x_i_σ (3))-4 (x_i_σ (2)/x_i_σ (3)))=0. 
Up to the S_5-action, we may reduce to finding tuples (x_1 ,x_2 ,x_3 ,x_4 ) defining a quadratic point (x_1 :x_2 :x_3 :x_4 :1) on Bring's curve and satisfying x_1+4 x_1 = x_2 +4 x_2 . and either x_1+4 x_1 = x_3 +4 x_3 , (1/x_1) +4(1/x_1) =(x_3/x_1) +4(x_3/x_1), or (1/x_3) +4(1/x_3) =(x_1/x_3) +4(x_1/x_3). Writing each quadratic point x_i = u_i + w_i, where u_i and w_i are in plus and minus eigenspaces for the Galois involution, these equations define a finite scheme over , and one may check that its rational points correspond exactly to the quadratic points in the statement of the proposition. bib 1 D. Abramovich and J. Harris, Abelian varieties and curves in W_d (C), Compositio Math. 78 227-238 (1991) 1.1 N. Adžaga, V. Arul, L. Beneish, M. Chen, S. Chidambaram, T. Keller, B. Wen, Quadratic Chabauty for Atkin–Lehner Quotients of Modular Curves of Prime Level and Genus 4, 5, 6 <https://arxiv.org/abs/2105.04811>, to appear, Acta Arithmetica. 1.1a N. Adžaga, T. Keller, P. Michaud-Jacobs, F. Najman, E. Ozman, B. Vukorepa, Computing Quadratic Points on Modular Curves X_0(N), <https://arxiv.org/abs/2303.12566>, 2023. 1.2 V. Arul, S. Müller, Rational points on X_0^+(125), <https://arxiv.org/pdf/2205.14744.pdf>, to appear, Edixhoven memorial volume of Expositiones Mathematicae. 1.3 J.S. Balakrishnan, A. J. Best, F. Bianchi, B. Lawrence, J.S. Müller, N. Triantafillou, J. Vonk, Two recent p-adic approaches towards the (effective) Mordell conjecture. in Arithmetic L-functions and differential geometric methods, 2021, 31–74. 4.2 J.S. Balakrishnan, N. Dogra, J.S. Müller, J. Tuitman, and J. Vonk. Explicit Chabauty-Kim for the split Cartan modular curve of level 13. Ann. of Math. (2), 189(3), 2019. 2 J.S. Balakrishnan, N. Dogra, J.S. Müller, J. Tuitman, J. Vonk, Quadratic Chabauty for modular curves: Algorithms and examples <https://arxiv.org/abs/2101.01862>, to appear, Compositio Mathematica. 3 J.S. Balakrishnan, A. Besser, F. Bianchi, J.S. Müller, Explicit quadratic Chabauty over number fields, Israel Journal of Mathematics, (2021), 1–48. 4 J.S. Balakrishnan, N. Dogra (with an appendix by J.S. Müller), Quadratic Chabauty and rational points I: p-adic heights, Duke Mathematical Journal, 167, no. 11 (2018), 1981-2038. BMcode J.S. Balakrishnan, B. Mazur, SageMath code, <https://github.com/jbalakrishnan/QC_bielliptic>, 2023. bmAWS J.S. Balakrishnan, J.S. Müller, Computational tools for quadratic Chabauty, Arizona Winter School Lecture notes 2020. 5 B.S. Banwait, Explicit isogenies of prime degree over quadratic fields International Mathematics Research Notices, rnac134, <https://doi.org/10.1093/imrn/rnac134> (2022) <https://arxiv.org/abs/2101.02673> BM B.S. Banwait, M. Derickx, Explicit isogenies of prime degree over number fields (2022), <https://arxiv.org/abs/2203.06009> BNP B.S. Banwait, F. Najman, O. Padurariu, Cyclic isogenies of elliptic curves over fixed quadratic fields (2022), <https://arxiv.org/abs/2206.08891> 6 F. Bars, Bielliptic Modular Curves, Journal of Number Theory 76, 154-165 (1999) 6.1 A. Besser, J.S. Müller, P. Srinivasan, p-adic adelic metrics and quadratic Chabauty I, <arXiv:2112.03873>, 2021. bianchi F. Bianchi, Quadratic Chabauty for (bi)elliptic curves and Kim's conjecture, Algebra & Number Theory 14(9): 2369-2416 (2020). bianchi-padurariu F. Bianchi, O. Padurariu, Rational points on rank 2 genus 2 bielliptic curves in the LMFDB, <arXiv:2212.11635>, 2022. 7 J. Box, Quadratic points on modular curves with infinite Mordell-Weil group, Mathematics of Computation 90 (2021), 321–343. 
bgg J. Box, S. Gajović, P. Goodman, Cubic and quartic points on modular curves using generalised symmetric Chabauty, International Mathematics Research Notices, Volume 2023, Issue 7, March 2023, 5604-5659, <https://doi.org/10.1093/imrn/rnab358> 8 P. Bruin, F. Najman, Hyperelliptic modular curves and isogenies of elliptic curves over quadratic fields, LMS Journal of Computation and Mathematics (2015) CHM L. Caporaso, J. Harris, B. Mazur Corrections to Uniformity of rational points and further comments, <https://arxiv.org/abs/2012.14461> CHM1 L. Caporaso, J. Harris, and B. Mazur, Uniformity of rational points. J. Amer. Math. Soc., 10 1-5 (1997) Col82 R.F. Coleman, Dilogarithms, regulators, and p-adic L-functions. Invent. Math., 69(2):171 – 208 (1982). Col85a R.F. Coleman, Torsion points on curves and p-adic abelian integrals. Ann. of Math. (2), 121(1):111–168 (1985). Col85b R.F. Coleman, Effective Chabauty. Duke Math. J., 52(3): 765–770 (1985). colemangross R.F. Coleman and B.H. Gross. p-adic heights on curves. In Algebraic number theory, volume 17 of Adv. Stud. Pure Math., pages 73–81. Academic Press, Boston, MA, 1989. CES B. Conrad, B. Edixhoven, and W. Stein, J_1(p) has connected fibers. Doc. Math. 8 331-408 (2003). Cor19 David Corwin. From Chabauty's method to Kim's non-abelian Chabauty's method. 2019. <https://math.berkeley.edu/ dcorwin/files/ChabautytoKim.pdf> D M. Derickx, Torsion points on elliptic curves over number fields of small degree. Several variations of Kamienny's criterion, <https://wstein.org/wiki/attachments/seminar(2f)nt(2f)20110318/slides.pdf> DEHMZ M. Derickx, A. Etropolski, M. van Hoeij, J. S. Morrow, D. Zureick-Brown, Sporadic Cubic Torsion, Algebra & Number Theory 15 (7) 1837 – 1864 (2021). EL B. Edixhoven and G. Lido. Geometric quadratic Chabauty. Journal of the Institute of Mathematics of Jussieu, 2021, 1–55. G1 S.D. Galbraith. Rational points on X^+_0(p) Experiment. Math.,8 (4) 311-318 (1999) G2 S.D. Galbraith. Rational points on X^+_0(N) and quadratic -curves. J. Théor. Nombres Bordeaux, 14(1) 205-219 (2002) GLQ J. Gonzalez, J-C. Lario, and J. Quer, Arithmetic of ℚ-curves, Progress in Mathematics, 224, 125-139 (2004) 9 S. Kamienny, Torsion points on elliptic curves over all quadratic fields, Duke Mathematical Journal, 53 157-162 (1986) 10 S. Kamienny, Torsion points on Elliptic Curves over all quadratic fields II, Bull. Soc. Math. de France. bf 114 (1986) 119-122 11 S. Kamienny, Torsion points on elliptic curves. Proceedings of the Conference on Number Theory, (1991) 12-15, GH Essen Preprint Series (G. Frey, ed.) (1991). 12 S. Kamienny, B.Mazur, Rational torsion of prime order in elliptic curves over number fields Astérisque, 228 (1995) 81-98 <http://www.numdam.org/item?id=AST_1995__228__81_0> kzb E. Katz, D. Zureick-Brown, The Chabauty-Coleman bound at a prime of bad reduction and Clifford bounds for geometric rank functions. Compos. Math. 149 (2013), no. 11 1818–1838 19 M. A. Kenku, The modular curve X_0(39) and rational isogeny, Mathematical Proceedings of the Cambridge Philosophical Society, 85, Cambridge University Press, (1979) 21-23. 20 M. A. Kenku, The modular curve X_0(169) and rational isogeny, Journal of the London Mathematical Society 2 (1980), no. 2, 239-244. 21 M. A. Kenku, The modular curves X_0(65) and X_0(91) and rational isogeny, Mathematical Proceedings of the Cambridge Philosophical Society, 87 Cambridge University Press (1980) 15-20. 22 M. A. 
Kenku, On the modular curves X_0(125), X_0(25), and X_0(49), Journal of the London Mathematical Society 2 (1981), no. 3, 415-427 12.5 M. A. Kenku, F. Momose, Torsion points on elliptic curves defined over quadratic fields, Nagoya Math. J. 109 (1988) 125-149 kim M. Kim, The unipotent Albanese map and Selmer varieties for curves. Publ. RIMS, 45:89 – 133, 2009. kimp1 M. Kim, The motivic fundamental group of 𝐏^1∖{0, 1,∞} and the theorem of Siegel. Invent. Math., 161:629 – 656, 2005. kimmassey M. Kim, Massey products for elliptic curves of rank 1. J. Amer. Math. Soc., 23(3):725 – 747, 2010. Kubert-Lang D. S. Kubert, S. Lang, Modular Units (Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], Springer-Verlag, New York, NY, 1981), vol. 244. L D. Lorenzini, Torsion points on the modular Jacobian J_0(N). Compos. Math. 96, 149-172 (1995). MT B. Mazur, J. Tate, Points of order 13 on elliptic curves, Invent. Math. 22 41-49 1973. MSw B. Mazur, P. Swinnerton-Dyer, Arithmetic of Weil Curves, lnventiones math. 25 1-61 (1974) M0 B. Mazur, Rational points on modular curves. Proceedings of a conference on modular functions held in Bonn 1976. Lecture Notes in Math. 601 Berlin-Heidelberg-New York: Springer (1977) M1 B. Mazur, Modular curves and the Eisenstein ideal, Publ. Math. IHES 47 33-186 (1977) M2 B. Mazur, Rational Isogenies of Prime Degree, Inventiones math. 44 129-162 (1978) MR1 B. Mazur, K. Rubin (With an appendix by Michael Larsen) Diophantine stability Amer. J. Math. 140 571- 616 (2018) <https://doi-org.jproxy.lib.ecu.edu/10.1353/ajm.2018.0014> Mer L. Merel, Bornes pour la torsion des courbes elliptiques sur les corps de nombres, Inventiones Mathematicae 124 437-449 (1996), 13 P. Michaud-Rodgers, Quadratic points on non-split Cartan modular curves, International Journal of Number Theory 18 (2022), no. 2, 245-26. NV F. Najman, B. Vukorepa, Quadratic points on bielliptic modular curves, Mathematics of Computation, to appear. nekovar J. Nekovář, On p-adic height pairings. In Séminaire de Théorie des Nombres, Paris 1990-1991, pages 127– 202. Birkhäuser, 1993. 14 A. P. Ogg, Hyperelliptic modular curves, Bulletin de la S. M. F., 102 449-462 (1974) 14.1 A. P. Ogg, Diophantine equations and modular forms. Bull. Amer. Math. Soc. 81 14-27 (1975) 14.2 A. P. Ogg, On the cusps of Γ_0(N), Proceedings of the Number Theory Conference (Univ. Colorado, Boulder, Colo., 173-177 (1972) 14.3 A. P. Ogg, Rational points of finite order on elliptic curves. Invent. Math. 12 105-111 (1971) O1 M. Ohta, Eisenstein ideals and the rational torsion subgroups of modular Jacobian varieties J. Math. Soc. Japan 65 733-772 (2013). O2 M. Ohta, Eisenstein ideals and the rational torsion subgroups of modular Jacobian varieties II.Tokyo J. Math. 37 273-318 (2014). 15 E. Ozman, S. Siksek, Quadratic Points on Modular Curves, Math. Comp. 88 (2019), 2461 – 2484. P P. Parent, Bornes effectives pour la torsion des courbes elliptiques sur les corps de nombres, Journal für die reine und angewandte Mathematik (Crelle's Journal) (1996) R1 Y. Ren, Rational torsion subgroups of modular Jacobian varieties. J. Number Theory 190, 169-186 R2 Y. Ren, Quadratic torsion subgroups of modular Jacobian varieties, Israel Journal of mathematics, 245 675-710 (2021), RW K. Ribet, P. Wake, Another look at rational torsion of modular Jacobians, PNAS 2022 119 No. 41 <https://doi.org/10.1073/pnas.2210032119> 17 N. Schappacher and R. Schoof, B. 
Levi and the Arithmetic of Elliptic Curves, The Mathematical Intelligencer 18, 57-69 (1996) <https://irma.math.unistra.fr/ schappa/NSch/Publications_files/1996_RSchNSch.pdf> serre-galois J.-P. Serre. Topics in Galois theory, Volume 1 of Research Notes in Mathematics. Jones and Bartlett Publishers, Boston, MA, 1992. Lecture notes prepared by Henri Damon, With a foreword by Darmon and the author. sikseksymm S. Siksek, Chabauty for symmetric powers of curves, Algebra & Number Theory 3 (2009), no. 2, 209-236. siksek S. Siksek, Quadratic Chabauty for modular curves, <https://arxiv.org/pdf/1704.00473.pdf>, 2017. Si J. Silverman, The Arithmetic of Elliptic Curves, Springer-Verlag (1986) 16 A. V. Sutherland, Torsion subgroups of elliptic curves over number fields, <https://math.mit.edu/ drew/MazursTheoremSubsequentResults.pdf> takagi T. Takagi, The cuspidal class number formula for the modular curves X_0(M) with M square-free. J. Algebra 193, 180-213 (1997). Tr A. Trbović, Torsion groups of elliptic curves over Quadratic fields ℚ(√(d)), 0 < d < 100 (2018) <https://arxiv.org/pdf/1806.05993> Y H. Yoo, Rational torsion points on Jacobians of modular curves. Acta Arith. 172 299-304 (2016). Zan U. Zannier, Torsion in algebraic groups and problems which arise, To appear. zhang S. Zhang, Small points and adelic metrics. J. Algebraic Geom., 4(2):281-300, 1995.
http://arxiv.org/abs/2307.07631v1
20230714210159
Towards Model-Size Agnostic, Compute-Free, Memorization-based Inference of Deep Learning
[ "Davide Giacomini", "Maeesha Binte Hashem", "Jeremiah Suarez", "Swarup Bhunia", "Amit Ranjan Trivedi" ]
cs.LG
[ "cs.LG" ]
-0.25in Towards Model-Size Agnostic, Compute-Free, Memorization-based Inference of Deep Learning Davide Giacomini^1, Maeesha Binte Hashem^1, Jeremiah Suarez^2, Swarup Bhunia^3, and Amit Ranjan Trivedi^1 ^1AEON Lab, University of Illinois at Chicago (UIC), Chicago, IL, USA, ^2Illinois Mathematics and Science Academy, IL, USA, ^3University of Florida (UFL), FL, USA August 12, 2023 ================================================================================================================================================================================================================================================================================== The rapid advancement of deep neural networks has significantly improved various tasks, such as image and speech recognition. However, as the complexity of these models increases, so does the computational cost and the number of parameters, making it difficult to deploy them on resource-constrained devices. This paper proposes a novel memorization-based inference (MBI) that is compute-free and only requires lookups. Specifically, our work capitalizes on the inference mechanism of the recurrent attention model (RAM), where only a small window of input domain (glimpse) is processed in a one-time step, and the outputs from multiple glimpses are combined through a hidden vector to determine the overall classification output of the problem. By leveraging the low-dimensionality of glimpse, our inference procedure stores key-value pairs comprising of glimpse location, patch vector, etc. in a table. The computations are obviated during inference by utilizing the table to read out key-value pairs and performing compute-free inference by memorization. By exploiting Bayesian optimization and clustering, the necessary lookups are reduced, and accuracy is improved. We also present in-memory computing circuits to quickly look up the matching key vector to an input query. Compared to competitive compute-in-memory (CIM) approaches, MBI improves energy efficiency by ∼2.7× than multilayer perceptions (MLP)-CIM and by ∼83× than ResNet20-CIM for MNIST character recognition. Deep neural network; edge computing § INTRODUCTION Ultra-low-power edge inference of deep neural networks (DNNs) has revolutionized many application spaces, enabling edge devices to perform complex data-driven inference and real-time decision-making with minimal energy consumption. The edge inference of DNNs has opened up new avenues for applications such as wearables, smart homes, Internet-of-Things (IoT), cyber-physical systems, and many more<cit.>. By performing most computations at the data source, edge inference also helps mitigate privacy and security concerns by keeping sensitive information on local devices rather than transmitting it to remote servers. Additionally, edge computing helps to reduce network congestion and lowers carbon footprint by minimizing the need for data to be transmitted over long distances. DNNs are increasingly utilized in applications like autonomous insect-scale drones<cit.>, robotic surgery <cit.>, and cognitive assistants. However, improving their predictive capacity in complex signal spaces requires increasing the number of trainable parameters and network depth. For instance, GPT models have achieved remarkable performance but require enormous parameters, ranging from 175 billion in GPT-3 to 100 trillion in GPT-4. 
Edge-friendly models like MobileNetV2 <cit.>, ResNet50 <cit.>, and EfficientNet-B0 <cit.> offer some efficiency, but still demand significant computational resources. Limited resources on edge devices pose challenges in handling the growing complexity of deep learning models. Fundamentally, there could be two ways to perform inferential computations [Fig. 1]. The first approach involves processing all necessary arithmetics through modules, such as multipliers, adders, shifters, and other components, to obtain the resultant output. The second approach involves memorization where the resultant output is precomputed and memorized at all possible input combinations and thereafter retrieved during inference, obviating the need for any computations. Notably, the second approach becomes increasingly attractive as the workload of inferential computations increases. With the phenomenal growth of DNN model sizes and the number of model parameters reaching billions and trillions, the second approach might also be more memory-efficient by only storing a lookup table (LUT) of input-output (key-value) combinations than the DNN model parameters themselves. In this work, we pay closer attention to the above model-size agnostic memorization-based inference, i.e., MBI of DNNs to explore pathways for disruptive enhancement of edge inference. Specifically, our work makes the following contributions: * We introduce a novel memorization-based inference (MBI), which involves distilling a pre-trained model into a LUT to perform inference without requiring intensive arithmetics such as multiplications or additions. Instead, the inference process is compute-free, relying only on a sequence of key-value lookups on the distilled LUT for a given input query. * To improve the scalability of MBI, we demonstrate a novel framework combining recurrent attention mechanisms, Bayesian optimization-based optimal distance metric search, hierarchical clustering, and in-memory determination of the closest entry to an input. Recurrent attention mechanisms are leveraged to minimize the size of LUTs. Bayesian optimization of distance metrics improves prediction accuracy with incomplete tables. Hierarchical clustering minimizes the table size for each lookup. Finally, in-memory determination of the closest key to the input query improves the speed. * We characterize MBI for character recognition on the MNIST dataset under extremely low precision. The hidden state vector is quantized to one bit and the input patch vector to two bits. Specifically, we demonstrated a mixed-memorization-based inference where most low-complexity images are processed through memorization and fewer high-complexity images require full processing of traditional machine learning. Compared to competitive compute-in-memory (CIM) approaches, MBI improves energy efficiency by ∼2.7× than multilayer perceptions (MLP)-CIM and by ∼83× than ResNet20-CIM for MNIST character recognition. Sec. II introduces the opportunities and challenges for MBI. Sec. III details various components of the inference methodology and presents simulation results. Sec. IV concludes. § MODEL-SIZE AGNOSTIC, COMPUTE-FREE, MEMORIZATION-BASED INFERENCE (MBI) §.§ Opportunities and Challenges of Lookup-Only Inference Our approach is focused on developing an inference methodology that can make predictive workloads independent of the number of layers and model parameters. This would allow for complex predictions to be made within a constant time and memory budget. In Fig. 
2, our MBI approach accomplishes this by distilling the model's predictions on query inputs into a key-value LUT, which requires only searching a matching key to query to make predictions, thereby avoiding intermediate feature extractions. Despite the potential for constant time/storage predictions, independent of the predictive model's architecture and parameters, naive memorization of even simpler prediction tasks results in extremely large LUT sizes that cannot be practically synthesized, stored, or inferred. For instance, consider character recognition on the MNIST dataset, where each image is 28×28 pixels <cit.>. Even if we consider a 2-bit representation of each pixel value in character images, a binarized vector representing the input image would be 2×28×28 = 1568 bits long, requiring a complete table with 1.04e+333 number of rows for all possible inputs! As the bit precision of input images (such as 8 bits per pixel) or the size of input images (such as 224×224 for cropped ImageNet images) increases, the size of memorization tables becomes even more exorbitant. Thus, although the potential benefits of constant time and memory predictions are evident under MBI, naive memorization is infeasible even on simpler predictive tasks. §.§ Proposed Methodology for Memorization-based Inference Our methodology employs several techniques to enhance the feasibility of constant time/storage-bound MBI. Fig. 2 presents the overview of the proposed methodology. Details on various components will be presented in the subsequent section. Primarily, we leverage the recurrent attention model (RAM) architecture of neural networks, introduced in <cit.>, where a recurrent neural network is integrated with attention mechanisms. The attention mechanism of RAM allows the network to assign different weights to different parts of the input so that it can selectively attend only to the most salient information of the input. The RAM is designed to learn to focus its attention on only a low-dimensional glimpse of the input image. In Fig. 2, for MBI, the RAM architecture minimizes the operating input dimension in each time-step (such as to only 3×3), thus enabling a significant reduction in the necessary LUT to make the inference scheme feasible. Secondly, we rely on incomplete tables for MBI where only the closest match, instead of an exact match, to an input query vector is required to determine the readout. The LUT size for MBI need not span the entire input space; instead, it can just be a subset of the input space. For example, under the LUT size budget of N rows, the input space can be sampled on N query points, and an incomplete table of N rows can be used for MBI. Enabling MBI from incomplete tables further improves the practicality of the procedure where storage resources can be explicitly accounted for. Furthermore, we explore Bayesian optimization to determine the optimal distance metrics to improve inference accuracy even with incomplete tables. Bayesian optimization can optimize expensive-to-evaluate functions, such as optimal distance metric in MBI, by building a probabilistic objective function model and iteratively selecting new points to evaluate based on the expected improvement in the model's performance. Our results in the next section indicate that Bayesian optimization-based optimal distance metrics can significantly improve the prediction accuracy by 3-4%. 
Finally, to enhance MBI's speed and energy efficiency, we utilize hierarchical clustering and analog-domain in-memory determination with flexible distance metrics. Hierarchical K-means clustering organizes table entries into a search tree, enabling quick search of a subset of the table at the leaf node for the closest entry to a query. Analog-domain in-memory computing circuits compute distances in parallel, using a winner-takes-all (WTA) approach for rapid retrieval. Performing computations within the memory array eliminates data movements, reducing energy and latency overheads. We analyze the impact of non-idealities, like transistor process variation, on inference accuracy in the analog circuit components at the leaf of the clustering tree. § COMPONENTS OF MEMORIZATION-BASED INFERENCE METHODOLOGY AND SIMULATION RESULTS §.§ Recurrent Attention Mechanisms for Downscaling LUTs Fig. 2 provides an overview of our RAM architecture for memory-based inference. The attention mechanism in the figure utilizes a glimpse network to extract a smaller window of the input image for further processing. Recurrence is achieved through a core network that processes the hidden state vector from the previous time step and outputs a new hidden state vector for the current time step. The location network computes a new location vector at each time step, enabling focus on specific image regions. High-resolution patches are extracted from the selected locations, progressively increasing in size at lower resolutions to widen the network's image coverage. These patches are stacked to form a patch vector, processed further through a linear layer. The glimpse network combines the glimpse vector with the previous time step's hidden state and feeds it into the core network. The core network output is propagated to the next time step and to the location and classification networks. The size of LUTs and the number of lookups in our MBI are influenced by RAM architecture parameters: glimpses, patch size, glimpse scale, and the number of patches. More glimpses lead to increased hidden state updates and, consequently, more MBI lookups. Larger patch sizes and more patches require longer key vectors, leading to larger LUTs and more rows for comprehensive key coverage. Increasing glimpse scale allows global image feature awareness but sacrifices granularity by compressing to a low-resolution window. To develop the MBI table, we fine-tuned hyperparameters and quantization levels to enhance memory efficiency. Fig. 3(a) demonstrates the impact of different hyperparameters on accuracy. Increasing patch size initially improves accuracy but saturates beyond a certain point. Glimpse scale increase reduces accuracy as it tries to capture the entire image in a few pixels. Similarly, accuracy saturation occurs with more glimpses and patches. Fig. 3(c) shows that adding one extra layer to the original network achieves the highest accuracy, while further layers decrease accuracy. Additionally, larger hidden state sizes improve accuracy by capturing more information. In Fig. 3(b), the accuracy remains relatively constant across the range of increasing patch size quantization bit sizes, except for a single-bit quantization. This deviation in accuracy can be attributed to a significant loss of information at the input level of the model due to 1-bit quantization. In contrast, the accuracy remains stable concerning the increase in bit size for hidden state vector quantization. 
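To make the distillation-and-lookup pipeline concrete, the sketch below (plain Python/NumPy) builds a quantized key from a multi-resolution glimpse, namely a 3×3 patch at 2 bits per pixel, a 1-bit hidden state, and the glimpse location, and then retrieves the closest stored key from an incomplete table using a component-weighted Manhattan distance of the kind optimized in the next subsection. All sizes, the crude strided downsampling, the random table contents, and the weights are illustrative placeholders, not the trained RAM components.

import numpy as np

PATCH_SIZE, NUM_PATCHES, SCALE = 3, 2, 2   # glimpse hyperparameters (placeholders)
HIDDEN_BITS = 256                          # width of the 1-bit hidden state (placeholder)

def extract_glimpse(image, center):
    """Stack NUM_PATCHES concentric patches around `center`, each SCALE times wider
    than the previous one, crudely downsampled back to PATCH_SIZE x PATCH_SIZE."""
    r, c = center
    patches = []
    for k in range(NUM_PATCHES):
        half = (PATCH_SIZE * SCALE**k) // 2
        padded = np.pad(image, half + 1)
        crop = padded[r + 1: r + 2 * half + 2, c + 1: c + 2 * half + 2]
        step = max(1, crop.shape[0] // PATCH_SIZE)
        patches.append(crop[::step, ::step][:PATCH_SIZE, :PATCH_SIZE])
    return np.stack(patches)

def make_key(image, center, hidden_state):
    """Quantize (patch: 2 bits/pixel, hidden: 1 bit) and concatenate into a key vector."""
    patch = np.round(np.clip(extract_glimpse(image, center), 0, 1) * 3).astype(np.int64)
    hidden = (hidden_state > 0).astype(np.int64)
    loc = np.asarray(center, dtype=np.int64)
    return np.concatenate([patch.ravel(), hidden, loc])

def weighted_manhattan(query, keys, n_patch, n_hidden, w=(1.0, 1.0, 1.0)):
    """Component-weighted Manhattan distance between a query key and all stored keys."""
    splits = [n_patch, n_patch + n_hidden]
    q_p, q_h, q_l = np.split(query, splits)
    k_p, k_h, k_l = np.split(keys, splits, axis=1)
    a, b, c = w    # in the full framework these weights are learned by Bayesian optimization
    d = (a * np.abs(k_p - q_p).sum(axis=1)
         + b * np.abs(k_h - q_h).sum(axis=1)
         + c * np.abs(k_l - q_l).sum(axis=1))
    return d / (a + b + c)

# Toy incomplete table: random keys with memorized readouts (e.g., class / next-state ids).
rng = np.random.default_rng(0)
n_patch = NUM_PATCHES * PATCH_SIZE * PATCH_SIZE
key_len = n_patch + HIDDEN_BITS + 2
table_keys = rng.integers(0, 4, size=(4096, key_len))
table_vals = rng.integers(0, 10, size=4096)

image = rng.random((28, 28))               # an MNIST-sized input
query = make_key(image, center=(14, 14), hidden_state=rng.standard_normal(HIDDEN_BITS))
d = weighted_manhattan(query, table_keys, n_patch, HIDDEN_BITS, w=(1.0, 0.5, 0.25))
best = int(np.argmin(d))
print(table_vals[best], float(d[best]))    # memorized readout and its matching distance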
§.§ Bayesian Optimization for Optimal Distance Metric To optimize the workload of MBI, we employ incomplete tables, where only a limited number of input combinations are stored based on available storage resources. Through randomly sampling the input space and capturing input-output episodes from the main model, we distill this information onto the MBI table. Consequently, instead of exact matches, we retrieve information from incomplete tables by searching for the closest match to an input query. To search for the optimal distance metric, we employed Bayesian optimization. A parameterized distance metric function between query 𝒬 and key 𝒦, 𝒟(𝒬,𝒦), based on weighted Manhattan distance was used as following: 𝒟(𝒬,𝒦) = a ·ℳ(q_p,k_p) + b ·ℳ(q_h,k_h) + c ·ℳ(q_l,k_l)/a+b+c Here, a, b, and c are learnable weighting parameters for patch vector (p), hidden state (h), and location vector (l) component of the key. ℳ() is the Manhattan distance function. q_p, q_h, and q_l are patch, hidden, and location vector components of the query 𝒬. With similar subscript notations, the components of key 𝒦 are defined. In Fig. <ref>, Bayesian optimization-based optimally-weighted Manhattan distance metric improves the prediction accuracy by 3–4% across LUT sizes compared to unweighted distance. §.§ Synthesizing Lookup Tree by Hierarchical Clustering Fig. 4(b) depicts a hierarchical K-means clustering method used to organize key values in a tree structure. The objective is to minimize search load when finding the closest match to an input query in the LUT. Keys in the table are divided into clusters, and their centroids are determined. Sub-clusters are then formed through multi-level clustering, continuing until a threshold for each node's total number of elements is exceeded. The query is compared to cluster centroids during the query-matching process, directing it to the appropriate branch. This comparison process repeats with sub-cluster centroids until the query reaches a leaf node, which is exhaustively compared to a few key vectors to find the closest match. Fig. 4(c) shows the histogram of the distance normalized to one between the matching key vector to a random query using the hierarchical clustering-based approach compared to an exhaustive search throughout the table. The former has a significantly smaller workload. As can be seen, the hierarchical clustering-based approach finds a matching key vector comparable to the exhaustive search with a very high probability – the maximum distance between the matching keys in both cases is less than 10%. The MBI approach can also be integrated with traditional DNNs to improve accuracy. In Fig. 5, MBI is only applied when a matching key to the input falls within a distance threshold for each glimpse iteration. Otherwise, traditional DNN is employed on such harder-to-generalize inputs. In the figure, as the matching distance threshold increases, more data can be processed using MBI, however, at the cost of lower accuracy. §.§ In-Memory Search of Closest Key to Query We present an in-memory processing approach to efficiently search the closest key vector to an input query in Fig. 6(a). Integrating the search and storage in the same memory structure obviates the table-data movements to enhance efficiency. The operational sequence of the circuits is as follows: Multibit key vectors are stored row-wise, and query vectors are applied on the top ports in the figure. 
In step-1, the pre-charge (PCH) mechanism is activated to charge all bit lines (BL/BLB) according to the column's bit-significance factor. For example, considering the p-bit precision of the query and key vector, a column operating on the bit significance factor j ∈ [0, p-1] is precharged with V_P,max/2^p-j-1. V_P,max is the maximum precharge level. In Fig. 3(b), only a 2-bit precision of the patch vector provided sufficient accuracy; hence, we encode the patch component of a key vector with 2-bit precision. From Fig. 3(b), the hidden state component of the key vector is quantized to 1-bit for the discussed results. Therefore, in the implemented scheme to compute the distance of a 2-bit quantized patch component of query and key vector, the columns storing higher significance bits are precharged to V_P,max and the columns storing the least significant bits are precharged to V_P,max/2. In step-2, after selecting the required row of key-bit vectors, each memory cell computes the bitwise difference between the corresponding key bit (k) and the applied query bit (q): BL discharges only when k=1 and q=0; BLB discharges only when k=0 and q=1. Such column discharges are utilized for multi-bit Manhattan distance computations. For example, consider the Manhattan distance computation between n-element long key 𝒦 and query 𝒬. Under p-bit precision, 𝒦/𝒬 is processed on p × n columns. The Manhattan distance between the i^th element of 𝒦 and 𝒬 is given by |𝒦_i - 𝒬_i| = ∑_j=0^p-1|k_ij - q_ij| × 2^j. Here, k_ij & q_ij are the corresponding binary bits of 𝒦 and 𝒬. Therefore, if k_ij=1 & q_ij=0, the corresponding BL discharges and if k_ij=0 & q_ij=1, the corresponding BLB discharges. If k_ij=q_ij, BL & BLB maintain their precharge levels proportional to the bit significance factor j. In step-3, to calculate the sum of all the differences, the charge-sum (CSUM) is activated. This results in averaging BL charges on the SLP and BLB charges on SLN through transmission gates at the top. Therefore, V_SLP≈1/n × p∑_i=0^n-1∑_j=0^p-1(1-1_k_ij=1, q_ij=0) × V_P,j V_SLN≈1/n × p∑_i=0^n-1∑_j=0^p-1(1-1_k_ij=0, q_ij=1) × V_P,j Here, 1 is the indicator function that is one only when the identity in the subscript is true and zero otherwise. × V_P,j is the column precharge voltage for j^th significance bits. Therefore, the average voltage of SLP and SLN, (V_SLP + V_SLN)/2, follows the Manhattan distance of 𝒦 and 𝒬. In step-4, the average voltage is digitized and multiplied with Bayesian optimization learned weights. If a key vector cannot fit in one memory array, it can be partitioned and processed in parallel across several memory arrays. The weighted distance from all arrays is combined, and the minimum distance index is searched by serially scanning all stored key vectors. Fig. 6(b) shows the SLP/SLN voltage distribution under the process variability while considering minimum-sized NMOS and PMOS with σ_VTH = 60 mV) on 32×32 bitcell array. Since the minimum sum line voltage difference (ΔSL) is at least 28 mV, the analog output can be accurately digitized with a 5-bit ADC. Fig. 6(c) shows the distribution of energy among various operations. ADC's energy is estimated from <cit.>. Peripherals and precharge energy is simulated using HSPICE based on 16 nm-LSTP predictive technology models in <cit.>. The energy dissipation of digital logic operations in step-4 is estimated based on <cit.>. One key/query-vector matching operation on 32×32 consumes ∼4.7 pJ energy in our 16 nm design. §.§ Compute-in-Memory vs. 
Memorization-based Inference Compute-in-memory (CIM) has become a predominant approach to improve the energy efficiency of deep learning by leveraging the same memory structure for storage and computations <cit.>. Table I compares both paradigms, CIM and MBI, for the MNIST character recognition test case; they differ in many key aspects: Firstly, unlike CIM, MBI only constrains input bit precision; weights in MBI need not be quantized. The underlying RAM architecture can be simulated at full precision to distill LUTs. Secondly, MBI is agnostic to the necessary multiply-accumulate (MAC) operations, a key metric of various neural network architectures. Specifically, complex models such as ResNet20 in the table can generalize better to more complex tasks but demand many MACs. MBI is more suited for the distillation of such complex models. In our approach, we have applied a threshold value to determine the amount of data to be processed by the MBI. When the threshold value is set to be highly stringent, only a small number of images undergo processing by MBI, resulting in nearly perfect accuracy. However, if the threshold value is set to be less stringent, a larger proportion of images are processed by MBI, but the accuracy may slightly decrease. Thirdly, MBI has a constant storage overhead that does not grow proportionally to predictive model complexity. In Table I, for MNIST, the storage overheads of MBI are significantly worse than CIM; however, for more complex models, MBI can be significantly more storage efficient by avoiding the storage of model parameters. Fourthly, by avoiding computations and utilizing only lookups, MBI achieves significantly lower energy per input image inference (Energy/Inference in Table I) even when the other works are optimistically projected to 16 nm (see comments in the table). MBI requires higher energy than a one-bit weight CIM design <cit.>; however, the applicability of single-bit weight models is limited to simple tasks. More complex tasks, such as object localization or ImageNet classification, require sufficiently high precision. § CONCLUSIONS We have introduced a novel memory-based inference approach that allows model size-agnostic inference under a constant time and storage budget. Our method condenses predictions from a trained model into a LUT. During inference, the LUT is searched for the closest matching key vector to a given input glimpse. By performing memorization-based predictions on multiple glimpses, the final prediction is obtained. Compared to competitive compute-in-memory (CIM) approaches, MBI improves energy efficiency by ∼2.7× over multilayer perceptron (MLP)-CIM and by ∼83× over ResNet20-CIM for MNIST character recognition.
http://arxiv.org/abs/2307.05608v1
20230710220655
RényiTester: A Variational Approach to Testing Differential Privacy
[ "William Kong", "Andrés Muñoz Medina", "Mónica Ribero" ]
cs.CR
[ "cs.CR" ]
RényiTester: A Variational Approach to Testing Differential Privacy William Kong, Andrés Muñoz Medina, Mónica Ribero August 12, 2023 ============================================================================================================= Governments and industries have widely adopted differential privacy as a measure to protect users' sensitive data, creating the need for new implementations of differentially private algorithms. In order to properly test and audit these algorithms, a suite of tools for testing the property of differential privacy is needed. In this work we expand this testing suite and introduce RényiTester, an algorithm that can verify if a mechanism is Rényi differentially private. Our algorithm computes a lower bound of the Rényi divergence between the distributions of a mechanism on neighboring datasets, only requiring black-box access to samples from the audited mechanism. We test this approach on a variety of pure and Rényi differentially private mechanisms with diverse output spaces and show that RényiTester detects bugs in mechanisms' implementations and design flaws. While detecting that a general mechanism is differentially private is known to be NP-hard, we empirically show that tools like RényiTester provide a way for researchers and engineers to decrease the risk of deploying mechanisms that expose users' privacy. § INTRODUCTION In the past decade, there has been an explosion of data-driven technologies such as automated chat bots, medical image classifiers and face recognition systems. As these technologies become more ingrained in our everyday lives, society is realizing that sharing data with these technologies, even in aggregate, may pose privacy risks. With this realization, regulators and tech companies have had to update their systems to handle data in a privacy-safe manner. At the same time, users expect technology to be automated and frictionless. This automation is generally data-driven, putting both goals of usability and privacy seemingly at odds. Luckily, the concept of differential privacy <cit.> has demonstrated that high-quality statistical information or machine learning models can still be generated without compromising the privacy of any individual user. At the heart of differential privacy is the concept of a mechanism. A mechanism ℳ is a randomized function that maps a dataset D to an object, such as a set of statistics or a machine learning model. Differential privacy quantifies how much any individual user in the dataset affects the output of a mechanism, and this quantification is measured by the privacy budget ϵ. The smaller ϵ is, the less each user affects the outcome of the mechanism and, hence, the less information about specific users may be leaked from the output of the mechanism. This intuition is formalized by bounding the distance between the distributions of the output of ℳ on two neighboring datasets D and D'. More formally, this is the distance between the distribution of random variables ℳ(D) and ℳ(D'), where D' is a dataset obtained from D by adding or subtracting a single record. The introduction of differential privacy to the research community has revolutionized the world of statistics and machine learning. Research in this field has been prolific and the community has shown that almost any learning task can be done in a differentially private manner. More importantly, mechanisms for these tasks are continuously being improved to extract the most utility, without compromising any privacy.
It is in these improvements that one of the issues of differential privacy is observed. Unlike other privacy notions, like k-anonymity, one cannot verify if a mechanism is differentially private based only on a single output of a mechanism. Indeed, differential privacy is an information-theoretic property of the mechanism that can only be verified by understanding the probability distribution over the space of outputs of a mechanism. This is straightforward when the mechanism is the well-known Laplace or Gaussian mechanism (although there are known errors in implementations of even these mechanisms). However, as mechanisms become more accurate, the distributions generally become more complex. Fully understanding the distributions of such mechanisms becomes harder, and errors in the analysis of such distributions (or errors in the implementations of such mechanisms) have occurred in the past <cit.>. In some of these scenarios, mechanisms that were asserted to be differentially private at a certain privacy budget level ϵ turned out to be either private at a different level or not private at all. As these mechanisms get deployed into real-world systems, it is important for researchers and regulators to verify the privacy claims of their mechanisms. Ideally, given a privacy budget ϵ, there would be a system that takes, as input, the implementation of a mechanism and validates that the mechanism is differentially private at the asserted level of ϵ. The stochastic nature of differential privacy makes this difficult, since verifying differential privacy requires bounding the distance between two distributions, which is generally hard to estimate. In this paper we propose a tester for detecting if a mechanism satisfies so-called Rényi differential privacy (RDP) guarantees <cit.>. RDP provides some advantages over approximate (ϵ, δ)-differential privacy. For one, it provides a better understanding of the privacy properties of the Gaussian mechanism by smoothly quantifying the probability of failing to achieve privacy. Moreover, its composability properties make it a great tool for calculating overall privacy budgets of iterative algorithms such as the celebrated differentially private stochastic gradient descent (DP-SGD). Indeed, popular open source privacy accounting libraries <cit.> are implemented with RDP as their backbone. For this reason we believe that a Rényi DP tester would be of the utmost importance to the privacy community and, to the best of our knowledge, this is the first such tester proposed. As an added benefit, we show how a Rényi differential privacy tester can be used to test ϵ-differential privacy. Finally, we believe that estimating lower bounds of the Rényi divergence is of independent interest to the statistics community <cit.>. Another contribution of our work comes from the use of Bayesian optimization methods to find neighboring datasets D and D' for which the privacy guarantee is violated. This approach allows a user to not only discover whether a mechanism is private, but also provides information about the type of datasets for which the mechanism leaks the most information. Previous work either ignores this <cit.> or tests only on grids containing extremal datasets <cit.>. Our experiments show that in some cases the privacy violation does not occur at an extremal dataset. The rest of the paper is organized as follows. First, we introduce the necessary concepts to derive our statistical test, then we discuss previous work on testing of differentially private mechanisms. 
We then proceed to introduce our test and its theoretical guarantees. Finally, we conduct extensive empirical evaluation to demonstrate that a) our distance estimator performs very well in practice and b) known privacy bugs can easily be detected using our tester. By open sourcing our tester we hope to provide a tool for researchers to easily verify the implementation of private mechanisms. § PRELIMINARIES Notation. : ^n → denotes a mechanism that receives an input dataset D ⊆^n with n records and domain ⊆ℝ^p and outputs a statistic y ∈⊆ℝ^d. §.§ Differential privacy and Rényi divergence Differential privacy <cit.> quantifies the level of risk that a user is exposed to when they contribute their data to a randomized mechanism. We formalize this concept in <ref>. Datasets D, D' are called neighbors, denoted by D ∼ D', if D can be obtained from D' by adding or removing one record. A randomized mechanism : ^n → satisfies (ϵ, δ)–approximate differential privacy, or is (ϵ,δ)–differentially private ((ϵ,δ)–DP), if for every pair of neighboring datasets D and D' and every set O ⊆ in the output space, we have P((D) ∈ O) ≤ e^ϵ P((D') ∈ O) + δ. We say satisfies pure differential privacy, or is ϵ–differentially private (ϵ–DP), when δ=0. An interpretation of differential privacy suggests that a mechanism is private if the distance between the distributions of (D) and (D') is small (relative to ϵ and δ). Under this interpretation, novel notions of privacy have emerged by introducing different ways of measuring divergences between distributions. Notably, the Rényi divergence <cit.> (which we define below) has become a popular choice when analyzing the privacy properties of mechanisms such as DP-SGD <cit.>. Let (Ω, ) be an arbitrary measurable space. Let P and Q denote two probability measures on (Ω, ). We assume that P is absolutely continuous with respect to Q [A measure P is absolutely continuous with respect to Q if for every set A ⊂ Ω such that Q(A) = 0 we have P(A) = 0.] and let dP/dQ denote the Radon–Nikodym derivative of P with respect to Q. For α>0, the Rényi divergence of order α between P and Q is given by D_α(P||Q) = 1/(α-1) ln ∫ (dP/dQ)^α dQ. We now make two remarks about the above definition. First, as α → 1, the quantity D_α(P||Q) tends to the well-known Kullback–Leibler (KL) divergence. Second, when P and Q admit density functions p and q respectively, the above expression is equivalent to D_α(P||Q) = 1/(α-1) ln ∫ p(x)^α / q(x)^(α-1) dx. We will sometimes abuse notation: for random variables X ∼ P and Y ∼ Q we write D_α(X || Y) = D_α(P||Q). Using this divergence, we can introduce the notion of Rényi differential privacy <cit.>. A randomized mechanism : ^n → satisfies (α,ϵ)–Rényi differential privacy if for every pair of neighboring datasets D and D', we have D_α((D)||(D')) ≤ ϵ. The next two results present some important properties of D_α(P||Q). Let 1<α_1<α_2 and let P and Q be probability measures. Then D_α_1(P||Q) < D_α_2(P||Q). Let be an ϵ–differentially private mechanism and α>1. Then D_α((D)||(D')) ≤ min{ϵ, 2αϵ^2}. The case α=∞ corresponds to pure DP, i.e., is an ϵ–DP mechanism if and only if for any D ∼ D' we have D_∞((D) || (D')) ≤ ϵ. § RELATED WORK There are generally two kinds of approaches used in differential privacy testing. The first approach uses adversarial attacks that try to break the privacy definition, like membership inference attacks <cit.> and data reconstruction attacks <cit.> of deep learning models trained with DP-SGD. 
Hence, the validation of whether a mechanism satisfies privacy is linked to the ability of the attack to succeed. The tests generated by these approaches are very valuable when trying to understand potential privacy risks on a single data set, by manually designing canaries that are expected to have highest sensitivity. However, they do not attempt to understand the worst case (unknown) scenario that differential privacy tries to protect. Running these tests generally requires white box access to the trained model and, more importantly, requires access to large portions of the training data, making auditing of a privately trained model impossible for someone who is not the data curator. Consequently, the resulting lower bounds from these approaches tend to be loose <cit.>. Moreover, the budget ϵ predicted by these experiments is generally much smaller than the theoretical budget. For example, some authors assert that their proposed models were private with an ϵ = 10^-3 when these models were trained without privacy. The second approach, that contains our proposed method, attempts to directly estimate the effective privacy parameters from black-box access to the tested mechanism and compare these effective privacy parameters with the ones stated by the privacy guarantee. This approach focuses on estimating the distance between the distribution induced by the mechanism in two different datasets. However, two key challenges arise: 1) how do we estimate the distance between distributions given two fixed neighboring datasets? and 2) how do we find the pair of neighboring datasets that maximize the distance between these distributions? The problem of estimating distance between distributions has been thoroughly studied in the statistics and hypothesis testing community. While providing a full overview of the literature in this space is beyond the scope of this work, we do highlight <cit.> which consider estimating probability distances through optimization methods over function spaces. Their work provided asymptotic guarantees while we provide strict finite sample complexities to obtain a lower bound on the Rényi divergence between two distributions. For the specific task of estimating the Rényi divergence, our estimator is inspired by the work of <cit.> which considers using neural networks to estimate Rényi divergence. The finite sample complexity bounds provided in that work, however, depend on the structure of the neural network and can rapidly become vacuous for the purpose of testing differential privacy. In contrast, our complexity bounds are independent of the network structure as we are primarily concerned with lower bounds on the Rényi divergence. In a related approach, <cit.> proposes to estimate the regularized kernel Rényi divergence, a lower bound on the Rényi divergence between distributions of a randomized mechanism. However, this approach requires knowledge about the covariance matrix of the underlying distributions, which is impractical for most mechanisms other than the Gaussian and Laplace mechanisms. Recent work on tight estimation of the privacy loss distribution <cit.> provides techniques for lower-bounding ϵ, and in some cases it can be tighter. Unfortunately, the previous method needs access to the cumulative distribution function of the distribution of the privacy loss random variable, which is precisely unknown in our considered setting. There is also a large body of literature pertaining to the testing of a mechanism's privacy, which we briefly go through here. 
<cit.> proposes a differential privacy tester for mechanisms with discrete and finite output, requiring access to the distribution over datasets and the probability measure over outputs induced by the tested algorithm. Instead of testing privacy in the worst-case setting, they test if the mechanism satisfies the guarantee over datasets with high probability. More importantly, the tester does not work for continuous output spaces. StatDP <cit.> proposes a system for detecting differential privacy violations by post-processing the output of the mechanisms through different statistics. The tester requires semi-black-box access to the mechanisms (as one of the post-processing techniques requires running the mechanism without privacy), which is infeasible for auditing certain systems. <cit.> presents a test for discrete (ϵ, δ)-DP mechanisms but omits the problem of finding the worst-case pair of neighboring datasets. DP-Sniper <cit.> provides an ϵ-DP tester that tries to explicitly find a set in the output space that maximizes the difference in probability for the output of the mechanism. The choice of neighboring datasets, however, is done using hard-coded rules that may hinder the ability to detect violations on new tasks, and under non-classic neighboring relations like the ℓ_∞ relation instead of the classic swap or add/remove definition of neighboring. Their framework is also specific to detecting ϵ-DP, as low probability events are hard to estimate. In contrast, our mechanism estimates RDP, which averages out low probability events. Moreover, we use our estimates to inform the search for worst-case datasets through a Bayesian optimizer. <cit.> proposes a similar approach but specifically targets auditing the privacy of DP-SGD. <cit.> extends the work of <cit.> by developing data poisoning attacks to explore the space of datasets, focusing on learning algorithms for predictive machine learning models rather than arbitrary statistical tasks. § RÉNYI TESTER In this section, we propose , an RDP and ϵ-DP (or pure DP) tester that is able to find instances where non-private mechanisms do not satisfy the privacy guarantee that they claim to have. While the sample complexity to prove that a mechanism satisfies pure ϵ-DP can be exponentially large <cit.>, we use several heuristics that help detect mechanisms that are not private. We start by providing an overview of followed by a derivation of the algorithm's subroutines. We finish by proving a sample complexity bound that ensures the test results are valid with high probability. We introduce in <ref>. The tester receives, as input, (i) black-box access to the tested mechanism , (ii) a value ϵ if validating ϵ-DP or a tuple (α, ϵ) if validating RDP, and (iii) a probability of failure β. It then proceeds as follows: * Generate neighboring datasets (line <ref>). This is done according to the process discussed in <ref>. * Generate samples from the mechanism. Given the datasets, the tester generates samples from the mechanism for each dataset. * Obtain a lower bound for the Rényi divergence between both samples. The details of the estimation process are described in Section <ref> and through Corollary <ref>. * Detect if the mechanism violates privacy. Specifically, use the bound in Lemma <ref> with Corollary <ref>. §.§ Variational formulations We now present an estimator for a lower bound on the Rényi divergence of two distributions. Our estimator relies on a variational formulation of the Rényi divergence. 
The first such formulation is a special case of the problem of calculating f-divergences via convex optimization <cit.>, and the formulation that we use (described below) is the one recently proposed by <cit.>. Let α>1, and let P and Q be probability measures on (Ω, ). Let Γ be any function space such that M_b(Ω) ⊆ Γ ⊆ M(Ω), where M_b(Ω) and M(Ω) are the sets of measurable bounded and measurable functions on Ω, respectively. Then, D_α(P||Q) = sup_g ∈Γ [ α/(α-1) log(P e^((α-1)g(X))) - log(Q e^(α g(X))) ]. Exact computation of the supremum in <ref> is generally hard, given that the complexity of the function space can be arbitrarily large for general distributions. We propose to relax this definition in two ways that allow us to derive a lower bound on the Rényi divergence. First, we fix a space of functions Φ ⊆ Γ. By restricting the search space for the supremum, the obtained value will be a lower bound on the real divergence. For example, one can define Φ as the set of functions generated by dense neural networks with bounded outputs. Second, we estimate the expectations using approximate (empirical) measures from samples, P_n, Q_n. While this last step introduces estimation error, this error can be bounded with high probability, thus allowing us to find a confidence interval for the lower bound. Let h: Ω → ℝ be a function in Φ and α>1. Define R^h_α(P||Q) := α/(α-1) log( ∫ e^((α-1) h(x)) dP ) - log( ∫ e^(α h(x)) dQ ), and, given samples X_1,..., X_n ∼ P and Y_1,...,Y_n ∼ Q, define its empirical counterpart R^h,n_α(X||Y) := α/(α-1) log( 1/n ∑_i=1^n e^((α-1) h(X_i)) ) - log( 1/n ∑_i=1^n e^(α h(Y_i)) ). The next section derives a sample complexity bound to quantify the estimation error err(n, δ) := |R^h,n_α(X||Y) - R^h_α(P||Q)| with probability 1-δ. Note that with the error function, we can provide a lower bound on the true Rényi divergence between P and Q as follows: for h_0 ∈ Φ and M_b(Ω) ⊆ Γ ⊆ M(Ω), we have R_α(P || Q) = sup_h ∈Γ R_α^h(P ||Q) ≥ sup_h ∈Φ R_α^h(P || Q) ≥ R_α^h_0(P ||Q) ≥ R^h,n_α(X||Y) - err(n, δ). §.§ Sample complexity The following theorem derives a technical inequality that every bounded mechanism satisfies with high probability for all neighboring datasets (cf. line <ref>). We provide a proof in the supplementary material. Let P and Q be two distributions. Let h: Ω ⊆ → ℝ be a function such that sup_x ∈Ω h(x) < C, let 𝐱 = (x_1,...,x_n) and 𝐲 = (y_1,...,y_n) be n realizations of P and Q, respectively, μ_1 = P e^((α-1)h(x)), and μ_2 = Q e^(α h(x)). Define also M_1 = e^((α-1)C) and M_2 = e^(α C). Then, if γ ∈ [0, min(M_1/μ_1, M_2/μ_2)] and n ≥ max(3M_1 log(2/β)/(μ_1 γ^2), 2M_2 log(2/β)/(μ_2 γ^2)), with probability at least 1-β, we have R_α^h(P||Q) ≥ R^h,n_α(𝐱 || 𝐲) - log((1+γ)/(1-γ)). Our sample complexity is dimension independent. On the other hand, there are results showing that the sample complexity of estimating the Rényi divergence from samples is lower bounded by e^d, where d is the dimension of the distribution output space. Our result does not contradict this fact because we are not estimating the true Rényi divergence, but a lower bound on the divergence. As the dimension of the mechanism's output increases, one could expect that a more complex space of functions is required in the definition of the lower bound. The next result shows how our estimate R^h,n_α(𝐱||𝐲) is used as a lower bound for the true Rényi divergence. Let h: Ω ⊆ → ℝ be a function such that sup_x ∈Ω |h(x)| ≤ C, let M denote a mechanism and D, D' be two neighboring databases, let 𝐱 = (x_1, …, x_n) be a sample from (D) and 𝐲 = (y_1, …, y_n) be a sample from (D'), and let β > 0 and γ be defined as in <ref>. 
If n is chosen according to <ref>, then with probability at least 1 - β, we have D_α((D) || (D')) ≥ R_α^h,n(𝐱||𝐲) - log((1 + γ)/(1-γ)). §.§ Selection of function h The previous section showed that we can choose a function h to lower bound the Rényi divergence between the output of a mechanism on two neighboring datasets. It remains to show how to select the function that obtains the tightest lower bound. In this section we provide a natural heuristic for choosing h. Fix C > 0 and let Φ denote a collection of functions bounded by C. We propose the following two-step approach. First, sample 𝐱 = (x_1, …, x_n) from (D) and 𝐲 = (y_1, …, y_n) from (D'). Let h^* be defined by h^* = argmax_h ∈Φ R_α^h,n(𝐱||𝐲). Second, given h^*, generate a new sample 𝐱' = (x'_1, …, x'_n) from (D) and 𝐲' = (y'_1, …, y'_n) from (D'), and use <ref> on this sample to obtain a lower bound on the true Rényi divergence. The process just described corresponds to lines <ref>–<ref> in <ref>. It is also worth mentioning that the above approach is somewhat similar to DP-Sniper <cit.>. Specifically, the latter approach uses a training sample to find a set where the DP guarantee can fail and then uses a test sample to estimate the actual privacy violation. Model considerations. Even though the model complexity does not appear in the sample complexity of our mechanism, it is important to constrain the model class, as our heuristic only makes sense when R_α^h,n(𝐱||𝐲) and R_α^h,n(𝐱'||𝐲') are close. §.§ Dataset generation One of the main difficulties of testing for differential privacy is the worst-case nature of differential privacy guarantees. Namely, to prove a mechanism is not private, one has to find a pair of neighboring datasets where inequality (<ref>) or (<ref>) fails to hold. We propose to use black-box optimization to find datasets that maximize R^h,n_α(X||Y). Specifically, assuming that we have access to R_α^h,n: (D,D') ⊆ × → ℝ, our goal is to produce a sequence (D_t,D_t')_t that approaches the optimum. In our case, we only need to generate a pair (D,D') for which the condition in line <ref> does not hold. Available techniques include pure exploration methods, such as grid search, and techniques that use prior information to trade off exploration and exploitation, which can accelerate the optimization, such as evolutionary methods. We refer the reader to <cit.> for an overview. In our experiments we use an open-sourced implementation of the well-known Bayesian optimization software Vizier. § EXPERIMENTS This section presents numerical experiments for . We first demonstrate how can be used to detect violations of pure differential privacy guarantees. We then focus on RDP violations and specifically look into two common errors in DP-SGD implementations. We include in the supplementary material an analysis of the accuracy of estimating the Rényi divergence. [The code for running the experiments will be open sourced at publication time.] Throughout our exposition, we let ε>0 and n ≥ 1 be fixed and X∈ℝ^n denote the input dataset. Pure DP mean mechanisms. The first three mechanisms attempt to privately compute the mean by generating the random estimates (X) := ∑_i=1^n X_i/ñ + ρ_1, (X) := ∑_i=1^n X_i/n + ρ_2, and (X) := ∑_i=1^n X_i/n + ρ_1, where ñ = max{10^-12, n + τ}, τ ∼ Laplace(0,2/ε), ρ_1 ∼ Laplace(0,2/[ñε]), and ρ_2 ∼ Laplace(0,2/[n ε]). 
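To make the three mean estimators above concrete, the following Python sketch implements them under the stated noise scales. This is a hedged illustration: the function names are ours, and we assume Laplace(0, b) denotes a centered Laplace distribution with scale b.

```python
import numpy as np

def _lap(scale, rng):
    # Centered Laplace noise with the given scale (assumed parameterization).
    return rng.laplace(loc=0.0, scale=scale)

def private_mean(X, eps, rng=None):
    # Privatizes the count and uses it for both the mean and the noise scale.
    rng = np.random.default_rng() if rng is None else rng
    n_tilde = max(1e-12, len(X) + _lap(2.0 / eps, rng))
    return np.sum(X) / n_tilde + _lap(2.0 / (n_tilde * eps), rng)

def non_private_mean_1(X, eps, rng=None):
    # Uses the true (private) count n for both the mean and the noise scale.
    rng = np.random.default_rng() if rng is None else rng
    n = len(X)
    return np.sum(X) / n + _lap(2.0 / (n * eps), rng)

def non_private_mean_2(X, eps, rng=None):
    # Privatizes the count only for the noise scale; the mean still uses the true n.
    rng = np.random.default_rng() if rng is None else rng
    n = len(X)
    n_tilde = max(1e-12, n + _lap(2.0 / eps, rng))
    return np.sum(X) / n + _lap(2.0 / (n_tilde * eps), rng)
```

The three functions correspond, in order, to the three estimates defined above, which are discussed next.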
The first estimate satisfies ϵ-DP, the second one violates the guarantee because it has access to the private number of points, and the third one privatizes the number of points to estimate the scale of the noise added to the mean statistic, but the mean itself is computed using the non-private number of points. Sparse vector technique mechanisms. The next six mechanisms address different private and non-private implementations of the sparse vector technique (SVT), a mechanism for answering a stream of queries on a fixed dataset. SVT mechanisms compare each query value against a threshold, and the given algorithm returns certain outputs for a maximum number of queries c. We denote these by – and they correspond to Algorithms 1-6 in <cit.>. and satisfy ϵ-DP. satisfies ((1+6c)/4)ϵ-DP, and ,, and do not satisfy ϵ-DP for any finite ϵ. Rényi DP mean mechanisms. To verify the ability of our tester to detect violations of Rényi differential privacy, we first instantiate , a non-private Gaussian mean analog of Non-Private-Mean1 that uses the true number of points to compute the mean and noise scale but adds Gaussian noise instead of Laplace noise. DP-SGD mechanisms. We also include two flawed implementations of DP-SGD <cit.>. Recall that DP-SGD is parametrized by a clip norm G (which clips individual per-example gradients to have ℓ_2 norm at most G) and a noise multiplier σ, and that a single iteration of DP-SGD with per-example clipping is guaranteed to be (α, ϵ)-RDP for ϵ = α/(2σ^2). The first implementation simulates a scenario where a developer assumes they are using a noise multiplier σ_theory but in reality uses a noise multiplier σ_effective. We dub this scenario . For the second implementation, we consider an accounting error when using batch or micro-batch clipping instead of per-example clipping in DP-SGD. Per-example clipping is memory and computationally expensive when training high-dimensional models. To address these constraints at the cost of utility, practitioners split a batch of size n into m microbatches of size n/m, compute average gradients over each micro-batch, clip and noise the per-microbatch gradient, and finally average the resulting noisy micro-batch gradients. It sometimes goes unnoticed that the sensitivity of per-microbatch gradients is 2G instead of G. below refers to an implementation of a DP-SGD optimizer that receives a model f_θ, a learning rate, a noise multiplier σ, a clip norm value G, and a number of micro-batches, takes a DP-SGD step with noise scaled by σG with respect to the parameters θ, and does privacy accounting using a library that receives the batch size, number of epochs, and noise multiplier, assuming per-example clipping and ignoring the effect of microbatch clipping. The final budget should be ϵ = 2α/σ^2, but ignoring microbatching results in the misleadingly stronger claimed guarantee of ϵ = α/(2σ^2). Baselines. We compare 's auditing capacity first with the approximate differential privacy tester () presented in <cit.>. For completeness we introduce this algorithm as <ref> in the supplement. For a fixed pair of neighboring datasets, the algorithm estimates from samples the probability z of the algorithm violating a pure ϵ-differential privacy guarantee (line <ref>). If the mechanism is (ϵ, δ)–differentially private, then z<δ up to estimation error η (line <ref>). We also compare our method with DP-Sniper <cit.>. Recall that the original DP-Sniper paper uses different neighboring relationships for different mechanisms. 
Below we compare the methods under the same neighboring relationships to elucidate the power of these testers under similar conditions. DP-Sniper is generally unsuited for RDP, hence we do not include a comparison in the experimental section for non-pure DP mechanisms. The introduced in <cit.> is similar to but requires knowledge of certain covariance matrices that are generally not known a priori. Consequently, we do not compare with this test in our auditing experiments, but we do compare it with in the estimation of the Rényi divergence between Gaussian distributions in <ref>. Methodology. We run the tester with Φ being the class of functions generated by two-layer dense neural networks with 100 units in each hidden layer. To ensure the output of the network is bounded, we use a hyperbolic tangent activation scaled to C=16ϵ for the last layer. proposes its own grid of test cases to generate pairs of datasets. and are run by generating pairs of neighboring datasets using an open-sourced version of Vizier <cit.>, with an underlying NSGA-II evolutionary algorithm <cit.>. This method performed slightly better than a random search algorithm, while obtaining a similar speed of detection, or no detection at all (see <ref>). We test each mechanism for different values of ϵ and α, and test 5 times for each mechanism. We found that both and had different estimator values over the five runs, but the outcome (False or Passed) was consistent across runs. Pure DP results. The results of our experiments are summarized in Table <ref>. is able to detect all one-dimensional non-private mechanisms, while the fails to detect , is not defined for high-dimensional output spaces, and cannot be applied to sparse vector technique algorithms. misses and but catches all the errors for at least one pair of parameters (α, ϵ). DP-Sniper succeeds at detecting the same mechanisms as . However, it requires 10M samples while only needs 400K samples. Rényi DP results. detects all errors while the does not, even when varying the discretization size of the output space. It does so by evaluating fewer than 10 pairs of neighboring datasets (we present the average number of trials in the appendix). DP-Sniper does not apply in this setting. In the appendix we further investigate the potential of to detect 's implementation for different values of σ_effective. presents an example where exploring extremal datasets is not useful for catching privacy violations, but our dataset generation technique can find pairs of datasets violating the privacy constraint in an average of 5 trials. In this case, assuming gradients are in the [-2,2] interval and assuming a clip norm of G=1, the privacy violation occurs at the neighboring datasets D={-1} and D'={-1,2}, where the sensitivity of the clipped averaged gradient is 2, and not at the neighboring datasets D={-2} and D'={-2,2}, where the sensitivity is 1. It is important to highlight that our implementation for detecting errors for higher values of σ_effective is mostly limited by the cap C used to define the space Φ. This capping parameter noticeably delivers smaller divergence estimates, making it harder to find privacy leaks. Unfortunately, increasing this constant substantially increases the required Ω(e^α C) sample size. In the following section we find that removing this cap provides very accurate estimation for Gaussian distributions. We leave tightening the sample complexity as future work. 
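The Ω(e^αC) dependence just mentioned can be made concrete with the sample-size bound from <ref>. The short sketch below (our own illustration) evaluates that bound; the moments μ_1 and μ_2 are unknown in practice, so their default values here are placeholders, not numbers from the paper.

```python
import math

def required_samples(alpha, C, beta, gamma, mu1=1.0, mu2=1.0):
    """n >= max(3*M1*log(2/beta)/(mu1*gamma^2), 2*M2*log(2/beta)/(mu2*gamma^2)),
    with M1 = exp((alpha-1)*C) and M2 = exp(alpha*C).
    mu1, mu2 stand for P e^{(alpha-1)h} and Q e^{alpha h}; defaults are placeholders."""
    M1 = math.exp((alpha - 1.0) * C)
    M2 = math.exp(alpha * C)
    n1 = 3.0 * M1 * math.log(2.0 / beta) / (mu1 * gamma ** 2)
    n2 = 2.0 * M2 * math.log(2.0 / beta) / (mu2 * gamma ** 2)
    return math.ceil(max(n1, n2))

# The exp(alpha*C) factor dominates: increasing the cap C quickly inflates n.
for C in (1.0, 2.0, 4.0):
    print(C, required_samples(alpha=2.0, C=C, beta=0.05, gamma=0.5))
```

The loop makes the exponential blow-up in C visible directly, which is the trade-off discussed above when choosing the cap on Φ.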
The high sample complexity for measuring divergences between distributions seems to be universal. In <ref> we report the number of samples for , , and . requires at least one order of magnitude fewer samples than the baselines and does not need a discretization parameter m. § DISCUSSION We presented a new test for detecting privacy violations that is suited to pure and Rényi differential privacy and, hence, is able to detect flaws in non-private mechanisms. While failing to detect a few pure differential privacy leaks, it appears to be the first tester for Rényi differential privacy guarantees that requires only black-box access to the mechanism. We highlight that our tester is particularly flexible and that it can easily be improved as better sample complexity bounds are derived for variational approaches to Rényi divergence estimation. As demonstrated in <ref>, there is still a noticeable gap between the theoretical and practical error bounds on these estimates. We leave possible theoretical improvements as a future area of research. § ADDITIONAL PROOF DETAILS Below we introduce Chernoff's multiplicative bound, which we use in the proof of Theorem <ref>. Let X_1,..., X_n be independent random variables drawn according to some distribution with mean μ and support in [0,M]. Then for any γ ∈ [0, M/μ - 1] the following inequalities hold: P(1/n ∑_i=1^n X_i ≥ (1+γ)μ) ≤ e^(-nμγ^2/(3M)) and P(1/n ∑_i=1^n X_i ≤ (1-γ)μ) ≤ e^(-nμγ^2/(2M)). [<ref>] Let P and Q be two distributions. Let h: Ω ⊆ → ℝ be a function such that sup_x ∈Ω h(x) < C, let 𝐱 = (x_1,...,x_n) and 𝐲 = (y_1,...,y_n) be n realizations of P and Q, respectively, μ_1 = P e^((α-1)h(x)), and μ_2 = Q e^(α h(x)). Define also M_1 = e^((α-1)C) and M_2 = e^(α C). Then, if γ ∈ [0, min(M_1/μ_1, M_2/μ_2)] and n ≥ max(3M_1 log(2/β)/(μ_1 γ^2), 2M_2 log(2/β)/(μ_2 γ^2)), with probability at least 1-β, we have R_α^h(P||Q) ≥ R^h,n_α(𝐱 || 𝐲) - log((1+γ)/(1-γ)). From Chernoff's multiplicative bound (<ref>, <ref>) we know that for γ_1 ∈ [0, M_1/μ_1 - 1], with probability at most e^(-nμ_1γ_1^2/(3M_1)), we have (1/n ∑_i=1^n e^((α-1)h(x_i))) / 𝔼[e^((α-1)h(X))] ≥ 1+γ_1. This implies that log[1/n ∑_i=1^n e^((α-1)h(x_i))] - log P e^((α-1)h(X)) ≥ log(1+γ_1) ≥ ((α-1)/α) log(1+γ), or equivalently, (α/(α-1)) log[1/n ∑_i=1^n e^((α-1)h(x_i))] - (α/(α-1)) log P e^((α-1)h(X)) ≥ log(1+γ_1). Note that by setting n ≥ 3e^((α-1)C) log(2/β)/(μ_1 γ_1^2), the above bound holds with probability at most β/2. A similar analysis (using <ref>) shows that for n ≥ 2e^(αC) log(2/β)/(μ_2 γ_2^2), with probability at most β/2 the following bound holds: log Q e^(α h(Y)) - log[1/n ∑_i=1^n e^(α h(y_i))] ≥ log(1/(1-γ_2)). Finally, summing <ref> and <ref>, and using the union bound, with probability 1-β we have that R^h,n_α(𝐱||𝐲) - log(1/(1-γ_2)) - log(1+γ_1) ≤ R^h_α(P || Q). The proof follows by letting γ = min(γ_1, γ_2). § EXPERIMENT DETAILS §.§ Approximate DP tester <cit.> We include the approximate DP tester in <ref>. §.§ Further comparison with DP-Sniper Below we include a more complete comparison against on SVT mechanisms with add/remove and ℓ_∞ neighboring relations and with different sample complexities. only has an advantage over when using the ℓ_∞ relation and using at least 10 million samples. For the more common add/remove definition, has the same performance as . §.§ implementation details Function class. For all auditing mechanisms we used as the underlying function class Φ the family of fully connected neural networks with two dense layers, each with 100 units. To ensure that functions h ∈ Φ are bounded but can reach the real value of the divergence (the ϵ for which we test), we add a hyperbolic tangent activation scaled to 16ϵ. Privacy parameters. 
Below we show results with different selections of the hyperparameters ϵ and α used for auditing mechanisms. The range of ϵ and α was selected based on sample complexity sizes that allowed us to run the tests in an efficient manner. Besides , we notice that results tend to be consistent across the selections of these parameters. §.§ Exploring the space of datasets Average number of trials to detect privacy violations. In <ref> we provide details on the number of datasets our algorithm needs to test before finding a dataset where the privacy guarantee is broken. Random Search vs. NSGA-II In <ref> we show the number of trials run by our algorithm before finding a dataset where a privacy violation occurs. We compare the random search and NSGA-II algorithms. §.§ DP-SGD mechanisms details All DP-SGD mechanisms train a simple model minimizing the loss function ℓ({x_i}_i=1^d, w) = ∑_i=1^d w · x_i. Below we investigate the potential of to detect 's implementation for different values of σ_effective. For this, we estimated the Rényi divergence varying σ_effective on the fixed neighboring datasets D = {-2,2}, D' = {-2}. <ref> shows that we detect the error for σ_effective ≤ 3. §.§ Rényi divergence estimation methods This subsection compares the estimator used by <cit.> and the one proposed in this work in <ref>. Given a positive-definite kernel k(·,·) and some λ > 0, the approach in <cit.> considers estimating the regularized kernel Rényi divergence, a variant of the Rényi divergence in which its usual input distributions P and Q are replaced by Σ_P and Σ_Q + λ Id, respectively, where Σ_F denotes the k-covariance operator of a distribution F. Specifically, this estimator replaces Σ_P and Σ_Q with their empirical estimates. It is then shown that an O(1/n) sample complexity is obtained for the empirical estimator, and that the (exact) Rényi divergence variant provides a lower bound on the classic Rényi divergence. Both methods have a theoretical error bounded by O(1/√(n)). We sample from two Gaussian distributions with different means where the exact value of the divergence is known, namely P = 𝒩(0,σ^2), Q = 𝒩(μ, σ^2), and D_α(P||Q) = αμ^2/(2σ^2). We set μ = σ = 1 and test for values of α = 1.5 and 2, the same values used in our auditing tests. We calculate the estimates five times and report the average over runs. As mentioned in the previous section, our estimator differs from the one used in the tester as we remove the final activation of our neural network and allow the network to generate unbounded predictions. While theoretically we cannot use this estimator to detect violations of privacy, it is important to understand its empirical performance. In <ref>, we show the results for estimating the Rényi divergence using 100 and 1000 samples. Observe that increasing the number of samples does not improve the quality of estimation. Further, it significantly harms the performance of due to its Ω(n^2) computational complexity. We plot the true Rényi divergence in blue. depends on the regularization parameter λ shown on the x-axis; as pictured, the estimator is highly sensitive to this value. λ=0.001 achieves the best performance when working with 100 samples, while achieving the worst for 1000 samples. <ref> shows the estimated divergence using (<ref>). Given that we do not need a confidence interval as in the previous section, we work with an unbounded neural network. We observe that with 10K samples we can achieve tight estimates with small variance.
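As an additional sanity check on the variational estimator, the following sketch (our own illustration, not code from the paper) runs the two-step procedure on the equal-variance Gaussian example above and compares the resulting lower bound with the closed form αμ^2/(2σ^2). For simplicity it searches a small affine family h(t) = a·t + b instead of the neural networks used in the experiments.

```python
import numpy as np

def renyi_lower_bound(x, y, h, alpha):
    # Empirical variational objective R^{h,n}_alpha(x || y).
    t1 = alpha / (alpha - 1.0) * np.log(np.mean(np.exp((alpha - 1.0) * h(x))))
    t2 = np.log(np.mean(np.exp(alpha * h(y))))
    return t1 - t2

rng = np.random.default_rng(0)
mu, sigma, alpha, n = 1.0, 1.0, 2.0, 10_000
x = rng.normal(0.0, sigma, n)   # samples from P = N(0, sigma^2)
y = rng.normal(mu, sigma, n)    # samples from Q = N(mu, sigma^2)

# Step 1: choose h on a "training" split by maximizing the objective over an
# affine family (a simplification of the paper's neural-network search).
grid = np.linspace(-3.0, 3.0, 61)
a_star, b_star = max(
    ((a, b) for a in grid for b in grid),
    key=lambda ab: renyi_lower_bound(x[: n // 2], y[: n // 2],
                                     lambda t: ab[0] * t + ab[1], alpha),
)

# Step 2: evaluate the chosen h on held-out samples to report the lower bound.
est = renyi_lower_bound(x[n // 2:], y[n // 2:],
                        lambda t: a_star * t + b_star, alpha)
print(est, "vs closed form", alpha * mu ** 2 / (2 * sigma ** 2))
```

For equal-variance Gaussians the log-density ratio is affine, so this restricted family already suffices to approach the true divergence; richer mechanisms would require the more expressive function classes discussed in the paper.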
http://arxiv.org/abs/2307.05085v1
20230711073530
A self-sustaining mechanism for Internal Transport Barrier formation in HL-2A tokamak plasmas
[ "W. H. Lin", "J. Garcia", "J. Q. Li", "S. Mazzi", "Z. J. Li", "X. X. He", "X. Yu" ]
physics.plasm-ph
[ "physics.plasm-ph" ]
APS/123-QED ^1Southwestern Institute of Physics, Chengdu 610041, China. ^2CEA, IRFM, F-13108 Saint Paul-lez-Durance, France. ^bAuthor to whom correspondence should be addressed: [email protected] The formation of Internal Transport Barrier (ITB) is studied in HL-2A plasmas by means of nonlinear gyrokinetic simulations. A new paradigm for the ITB formation is proposed in which different physics mechanisms play a different role depending on the ITB formation stage. In the early stage, fast ions, introduced by Neutral Beam Injection (NBI) ion system, are found to stabilize the thermal-ion-driven instability by dilution, thus reducing the ion heat fluxes and finally triggering the ITB. Such dilution effects, however, play a minor role after the ITB is triggered as electromagnetic effects are dominant in the presence of established high pressure gradients. We define the concept of ITB self-sustainment, as the low turbulence levels found within the fully formed ITB are consequences of large scale zonal flows, which in turn are fed by a non-linear interplay with large scale high frequency electromagnetic perturbations destabilized by the ITB itself. A self-sustaining mechanism for Internal Transport Barrier formation in HL-2A tokamak plasmas W. H. Lin^1, J. Garcia^2, J. Q. Li^1,b, S. Mazzi^2, Z. J. Li^1, X. X. He^1, X. Yu^1 August 12, 2023 ============================================================================================= § INTRODUCTION The final goal of magnetic confinement devices is to confine plasmas of high temperature and density for sufficiently long time in order to produce economically advantageous fusion energy. Confined plasmas can be severely degraded by the outward energy transport driven by micro-instabilities such as the Ion-Temperature-Gradient (ITG) mode <cit.>. Therefore, a credible path towards reliable energy fusion production must rely on mechanisms controlling such an energy transport. Plasmas with Internal transport barriers (ITB) <cit.>, characterized by a suppression of heat transport driven by microturbulence leading to high core temperatures and densities, have been shown to provide a way to improve plasmas energy confinement in various tokamaks <cit.>. The formation and characteristics of ITB have been extensively studied. Several physical mechanisms have been put forward to explain energy transport reduction or suppression within an ITB. One of the initial mechanisms proposed was the E× B flow shear turbulence stabilization (see <cit.> for example), which manifests itself by breaking up turbulent eddies and reducing the amplitude and cross phase of turbulent fluctuations. In this context, negative or low magnetic shear is also known to have a synergistic effect with E× B shear on ITB formation, as it weakens the drive of some unfavourable instabilities <cit.> on one hand and prevents the detrimental effects brought by E× B shear <cit.> on the other. Other mechanisms related to the presence of highly energetic fast ions have been proposed as well. 
A large fraction of fast ions produced by neutral beam injection (NBI) are found to be crucial in ITB formation through their dilution effects <cit.>, while a small minority of them could also be decisive through mechanisms such as linear resonant interaction with ITG <cit.> or the enhancement of α-stabilization <cit.>. Despite the number of studies devoted to clarifying the physical mechanism behind ITB formation, there are still aspects that remain unclear, e.g., whether a single physical mechanism or multiple ones are responsible for the ITB triggering and whether such mechanisms play significant roles in the ITB sustainment once it is fully formed. Clarifying these aspects is essential in order to properly evaluate whether plasmas with ITBs will be possible in future fusion reactors, for which some mechanisms, such as the E× B shearing produced by externally injected torque, are known to be less efficient. In this work, it is shown that the triggering and sustainment of the ITB rely on two different physical mechanisms depending on the ITB formation stage. Whereas the ITB triggering is found to be a consequence of the NBI fast-ion dilution, we propose the concept of self-sustainment of the ITB, as it is the ITB itself that produces the physical mechanisms providing its sustainment. The increase of electromagnetic (EM) effects in the presence of strong ITB-generated pressure gradients reduces turbulence and transport through the onset of large scale zonal flows <cit.> (with toroidal number n=0 and frequency ω=0), which tap energy non-linearly from large scale MagnetoHydroDynamic (MHD) fluctuations that are destabilized by the ITB itself. Meanwhile, the E× B shearing generated by the plasma rotation is not found to play a major role in the ITB formation. Such findings may pave the way for the formation of ITBs in future tokamaks as long as EM effects are dominant. Turbulence and transport analyses are performed with state-of-the-art gyrokinetic simulations for the ITB discharge #22453 in the HL-2A tokamak <cit.>. In such a discharge, as shown in Fig. <ref>, the ITB is triggered at about t=510 ms and stably sustained for a time window of about 250 ms. During the ITB formation, the core ion temperature increases from 1.0 to 2.3 keV, forming a region of large R/L_T_i (≈20) with the ITB foot located at ρ_tor≈ 0.4. Here, R is the major radius, L_T_i the inverse logarithmic gradient of the ion temperature and ρ_tor the normalized square root of the toroidal magnetic flux. The profiles at 510 and 650 ms, when the ITB begins to trigger and has been fully developed, respectively, are of particular interest to our study and provide the parameter sets for the simulations discussed below. Shortly after the ITB triggering, the Mirnov coils detect two perturbations, a weak one at 70 kHz and a stronger one at 20 kHz in the laboratory frame, the latter being identified as a long-lived mode (LLM) in previous work <cit.>. LLMs, as well as fishbone (FB) instabilities <cit.>, are both MHD modes frequently observed in HL-2A after ITB triggering. Although FBs are proposed as the key factors of the ITB formation in some tokamaks <cit.>, they could hardly be related to the ITB triggering in HL-2A <cit.>, where FBs are rarely observed preceding the ITB. As for LLMs, a limited number of works <cit.> exists regarding their effects on ITBs. The dynamic interplay between LLM and ITB remains obscure so far and will be investigated further in this work. 
The structure of this paper is arranged as follows: after the simulation setup is addressed in section 2, the dominant instabilities in various simulation conditions are analyzed in section 3, and the stabilizing factor that is of vital importance to ITB formation is investigated through the analysis of the ion heat flux in section 4. It will be shown that the full ITB formation benefits not only from the linear stabilization of dominant instabilities but also from the nonlinear EM effect, which is attributed in section 5 to the onset of zonal flows through the saturation of large scale EM modes such as the aforementioned LLM. Finally, in section 6, the mechanisms governing the ITB formation at different stages are summarized and a full picture of the ITB's self-sustainment is proposed. § SIMULATION SETUP All simulations reported in this paper are performed with the first-principle gyrokinetic code GENE <cit.> in its flux-tube version. The simulated flux-tube is at ρ_tor,0=0.25, slightly inside the ITB foot. Here the subscript ‘0’ indicates the flux-tube location. Miller geometry <cit.> is extracted from the EFIT equilibrium. An extended region of low but positive magnetic shear ŝ is observed inside the ITB foot, and at the simulated location ŝ=0.12 with the safety factor q_0=1.05. Typical grid parameters are as follows: perpendicular box sizes [L_x,L_y]=[272,218] in units of the ion Larmor radius ρ_i with discretizations [n_x,n_y]=[768,48], n_z=32 points in the parallel direction, 32 points in the parallel velocity direction and 32 magnetic moments. Here, x is the radial coordinate defined as x=aρ_tor (a the minor radius), y the binormal coordinate and z the coordinate along the field line. When the effects of the perpendicular flow shear are considered, a large aspect ratio and circular poloidal cross-section are assumed and therefore the normalized mean E× B shearing rate is defined as γ_E≡(ρ_tor,0/q_0)(dΩ/dρ_tor)/(c_s/R). Ω is the toroidal angular velocity, R the major radius and c_s the sound speed. The full impact of γ_E is considered in non-linear simulations only, in order to ensure the compatibility of the E × B algorithm <cit.> implemented in GENE. Other physical parameters are shown in Table <ref>. With the aim of analyzing the individual effects of fast ions, finite-β and γ_E, simulations are divided into subsets with or without some of these parameters, and they are performed at both the ITB triggering time, 510 ms, and when the ITB is well developed at 650 ms. Note that the tuple (n_fi, β, γ_E) is frequently used in the following figures to indicate the simulation conditions. § INSTABILITIES The frequencies and growth rates of the most unstable modes in linear simulations are presented in Fig. <ref>. At both 510 and 650 ms, the spectra are dominated by the electrostatic (ES) ITG modes, which are characterized by frequencies in the direction of the ion diamagnetic drift and peaks at binormal wave number k_yρ_i ≈ 0.3 (or equivalently toroidal number n ≈ 12). It can be observed, comparing the cases with and without β, that the ITG is stabilized by the well-known linear finite-β effects <cit.>. Furthermore, fast ions exert another damping effect on the ITG. This damping effect arises from the dilution of the main ion species, which acts as the driving force of the ITG, and thus reduces ITG growth rates by a factor scaling with the fast-ion concentration n_fi/n_e <cit.>. 
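For orientation, the two normalized quantities used repeatedly below — the shearing rate γ_E defined in the setup section and a rough dilution-scaled ITG growth rate — can be evaluated with a short numerical sketch. This is a hedged illustration only: the profile values, the reference growth rate and the linear (1 − n_fi/n_e) dilution proxy are our own assumptions, not HL-2A data or the paper's model.

```python
import numpy as np

# gamma_E = (rho_tor0 / q0) * (dOmega/drho_tor) / (c_s / R), dimensionless by construction.
rho_tor = np.linspace(0.0, 1.0, 101)          # normalized radius grid (assumed)
omega_tor = 6.0e4 * (1.0 - rho_tor**2)        # toroidal angular velocity [rad/s] (assumed profile)
rho_tor0, q0 = 0.25, 1.05                     # flux-tube location and safety factor (from the setup)
c_s_over_R = 3.0e5                            # c_s / R [1/s] (assumed)

dOmega = np.gradient(omega_tor, rho_tor)      # dOmega/drho_tor
i0 = np.argmin(np.abs(rho_tor - rho_tor0))
gamma_E = (rho_tor0 / q0) * dOmega[i0] / c_s_over_R
print("gamma_E ~", gamma_E)

# Crude dilution proxy: the ITG drive comes from thermal ions, so a first estimate
# scales a reference growth rate by the thermal-ion fraction 1 - n_fi/n_e.
gamma_itg0, n_fi_over_ne = 0.30, 0.25         # assumed values in units of c_s/a
print("diluted gamma_ITG ~", gamma_itg0 * (1.0 - n_fi_over_ne))
```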
After the ITB is well developed at 650 ms, modes at toroidal numbers n=1 and n=2, with frequencies higher than those of the ITG modes, are found to be destabilized without the contribution of the fast ions but rather as a consequence of the combined influence of steep R/L_T_i, low ŝ and finite-β. It was shown previously <cit.> that these EM modes have linear properties in good agreement with those of the Beta-induced Alfvén Eigenmode (BAE) <cit.>. As analyzed by linear simulations therein <cit.>, these BAEs are mainly destabilized by the thermal ion temperature gradient, with the critical value R/L_T_i|_critical = (1/q_0) √(7/4+T_e/T_i) (ω_tr/ω_*n_i) R/L_n_i, where ω_tr=√(2T_i/m_i)/(q_0R) is the thermal ion transit frequency and ω_*n_i is the density part of the ion diamagnetic drift frequency ω_*p_i=k_y T_i (R/L_n_i+R/L_T_i)/(e B_t R). In brief, the distinct property of these modes is that their frequencies scale with both the transit and diamagnetic drift frequencies of the thermal ions, ω∼ω_tr∼ω_*p_i, nearly independent of the characteristic parameters of the fast ions. Destabilized above a relatively low critical β, their mode structures in the ballooning representation, unlike those of ITG modes which are localized within a small ballooning angle, have not only a large extension over the ballooning angle but also small scale variations with a characteristic length of the order of β^1/2. To gain an insight into the nonlinear characteristics of the instabilities, Fourier transforms are applied to the fluctuating ES potential ϕ_1 in the saturated phase of nonlinear simulations, and the results of several typical cases are averaged spatially and presented in Fig. <ref>. The zonal component of ϕ_1, being the most prominent mode with zero frequency, is neglected in these spectra to highlight modes at n ≠ 0. Two patterns of spectrum are generally observed comparing Fig. <ref>(a) and (b). For those cases where finite-β and large R/L_T_i are not jointly present, the nonlinear spectra are consistent with the linear results. As shown by the representative case in Fig. <ref>(a), the peaks of the Fourier amplitudes coincide with the linear frequencies of the most unstable modes, with bandwidths arising from nonlinear scattering of dominant modes or coexisting subdominant ones. For such cases, no modes are found prominent in a frequency range different from those of the ITG modes, except that the bandwidth, as can be seen in Fig. <ref>(c), is broadened with finite γ_E. However, when finite-β is considered in the presence of large R/L_T_i, high frequency modes appear apart from the ITGs. Those at n=1 and n=2 are the aforementioned BAEs with frequencies of around 40 and 55 kHz respectively, while those at higher n have exponentially low amplitudes. It can be seen that, if the rotation frequency f_tor≈ 6.7 kHz is considered with f=f_lab-nf_tor, the BAE at n=2 with f ≈ 55 kHz corresponds to the mildly destabilized perturbation at f_lab≈ 70 kHz in Fig. <ref>. When γ_E is retained, as can be seen in Fig. <ref>(d) where the Fourier spectra are further averaged over toroidal numbers, modes at n=1 appear with frequencies of f ≈ 14 kHz, very close to the frequency of the n=1 LLM in the stationary frame. While finite γ_E is essential for the LLM to appear on one hand, fast ions act as a non-resonant energy source for its destabilization <cit.> on the other. As can be seen in Fig. <ref>(d), the n=1 LLM could appear but would be dominated by the n=1 BAE if the contribution of fast ions were excluded. 
Most importantly, the presence of a significantly large R/L_T_i is indispensable for the destabilization of the LLM. § ION HEAT FLUXES The toroidal spectrum of the flux-surface-averaged ion heat fluxes, computed from the saturated phase of nonlinear simulations in units of gyroBohm (gB ≡ n_e T_e^5/2 m_i^1/2 / (e^2 B_t^2 a^2)), is presented in Fig. <ref>(a) and (c). For comparison, the fluxes are integrated over toroidal number n and plotted in the right panels as a function of the quasi-linear ion heat fluxes calculated from the corresponding linear growth rates according to <cit.>. In addition, a model constant 𝒞 is determined by the ratio of the non-linear to the quasi-linear flux level of the full case at 510 ms, and a dashed line indicating the prediction of the quasi-linear model Q_i=𝒞Q^qs_i is shown in Fig. <ref>(b) and (d). Quasi-linear theory contains no information about the exact nonlinear couplings between instabilities, typically the couplings with zonal components. Therefore, the comparison between quasi-linear and non-linear fluxes disentangles the non-linear behaviours of the instabilities from the linear ones, and thus serves as an indicator of the nonlinear effects. In the same panels, the ion heat fluxes calculated by the transport code ONETWO <cit.> are shown by dash-dotted lines to indicate the power balance level. Also, the heat fluxes of fast ions are typically negligible compared to those of thermal ions, and are therefore omitted in the following analysis. At 510 ms, it can be seen from Fig. <ref>(a) that the fluxes are significantly reduced when fast ions and/or finite-β are included, and that fast ions are the more effective contributor to the reduction. Comparing the full case and the case without any factor, the fluxes are found to be reduced by about 78%, to which fast ions alone contribute about 90%. Fast ions in our cases have a large density but a relatively low pressure gradient, destabilizing no extra mode as analyzed above. Their introduction in our simulations merely causes a dilution of the fraction of thermal ions, the latter being the main drive of the ITG responsible for most of the fluxes. Such a fast-ion dilution effect is closely pertinent to tokamaks with low plasma density and high NBI power, and is recognized as the key factor for the triggering of the ITB at 510 ms. Another noteworthy point is that, as can be seen in Fig. <ref>(b), the non-linear fluxes are highly predictable by quasi-linear theory from the growth rates in the corresponding linear cases. This indicates that the mechanisms behind the flux reduction at 510 ms mainly lie in the linear stabilization effects of both fast-ion dilution and finite-β, as shown in Fig. <ref>(a). The quasi-linear predictions are reliable also at 650 ms before finite-β is included. In Fig. <ref>(c), ITG modes are fully destabilized due to the steep R/L_T_i, and the fluxes driven by them, peaking around n=6 or k_y ρ_i=0.15, would reach 50 gB in total if all the stabilizing factors were discarded. The effect of fast-ion dilution is not sufficient to suppress such fluxes. Instead, it is only when finite-β is retained that the fluxes are drastically reduced. More importantly, the flux reduction caused by finite-β in nonlinear simulations largely exceeds what is predicted by the quasi-linear model. It is seen in Fig. <ref>(d) that the effect of finite-β alone can reduce the total 50 gB fluxes to 7.5 gB, deviating from the quasi-linear prediction by about 74%. 
When all the other factors are considered along with finite-β, the total fluxes are eventually reduced to around 2.7 gB. The non-linear finite-β effect is thus identified as the dominant stabilizing effect during the sustainment phase of the ITB. The underlying mechanism was investigated previously in <cit.> and attributed to an energy transfer, enhanced by finite-β, between the flux-driven ITG modes and zonal flows. As will be shown below, a link between the flux reduction and the prominent growth of zonal flows is indeed identified with finite-β, but the energy required for such growth is mainly tapped from the n=1 EM modes instead of the ITG ones. The inclusion of both fast ions and finite-β makes a significant contribution to drawing the simulated heat fluxes near the power balance fluxes, but finite differences still exist between them at both 510 and 650 ms. Such differences may arise from the inevitable errors in measurements and parameter evaluations, but they do little harm to our conclusions, which rely mainly on the relative change of fluxes rather than on their absolute values under different simulation conditions. In sharp contrast to the beneficial role played by fast ions and finite-β, the effects of E × B shearing on the fluxes are barely visible at both 510 and 650 ms. It is found in Fig. <ref> that retaining finite γ_E slightly increases the fluxes, but such a change is within the error bar. The ineffectiveness of γ_E was reported also in other tokamaks <cit.> and can be simply explained by its low value, which in our case is only about one third of the growth rates of the dominating ITG modes (if compared in the same unit c_s/a). Therefore, it is concluded that the E × B shearing has no direct impact on the ITB formation other than changing the characteristics of the n=1 EM modes, which, in spite of their dominant amplitudes in the frequency spectrum, drive much lower fluxes than their ITG companions. § FLUX REDUCTION AND ZONAL FLOWS The aforementioned discrepancies between non-linear and quasi-linear fluxes with finite-β are related to the effect of the zonal flows (ZF). To confirm this, the full time traces of the total ion fluxes are shown in Fig. <ref>(a) for the cases with and without finite-β at 650 ms, labelled EM and ES respectively, and in the same plot the ZF energies for the respective cases are also displayed. Here, the field energy of each n is defined, concerning only the ES part, as E_n ≡ ∑_k_x ∫ Jdz C_1 |ϕ_1|^2, where J is the Jacobian. A positive-definite real constant C_1(k_x,k_y,z) is included so that E_n corresponds to the field part of the free energy <cit.>, and it could be substituted with another positive-definite real constant such as k_⊥^2. From Fig. <ref>(a), it is observed, regardless of whether finite-β is retained or not, that the ion heat fluxes at first develop linearly to form the γ_n-dependent shape during the initial phase, the time window before t_1 when the amplitudes of the ZFs are low, and that the peaks of the spectra begin to shift toward lower toroidal numbers as the ZFs continue to develop in the transient phase from t_1 to t_2. The corresponding flux spectra are shown in Fig. <ref>(b). In this phase, the heat flux spectra tend to evolve into the γ_n/k_⊥^2-dependent <cit.> quasi-linear shape with overall values predictable from the dashed line in Fig. <ref>(d). For the ES case, the heat fluxes begin to saturate at the quasi-linear level. 
For the EM case, however, it is observed that the heat fluxes, instead of becoming saturated, continue to abate slowly as the ZF energy in this case experiences persistent growth, whose cause is investigated in the following. The time evolution of E_n naturally depends on the linear contributions and the nonlinear ones, but only the latter contribute to the net growth of the ZF energy E_0. Taking the time derivative of E_n and substituting ϕ_1 with the modified distribution function g_1 through the field equation, the nonlinear contribution to dE_n/dt is expressed as (see the Appendix) dE_n/dt|_NL ≡ ∑_n^' T(n, n^',t) = Re ∑_k_y^',k_x,k_x^' (k_x^'k_y - k_y^' k_x) ∫ Jdz M ϕ̅_1^*|_k χ_1j|_k^' g_1j|_k-k^', where M = ∑_j π n_j q_j ∫ dv_∥ dμ B_0 is a moment operator and χ_1 = ϕ̅_1 - v_th,j v_∥ A̅_1,∥ + T_j μ/q_j B̅_1∥ is the gyro-averaged effective potential. T(n,n^',t) is calculated focusing on the coupling between the ZF (n=0) and all the other n^', and the results shown in Fig. <ref>(c) have been averaged over the time window when there is a net growth of the ZF. It is thus seen that the ZF mainly drains energy from the n=8 ∼ 12 ITG components when the low-n EM modes are artificially suppressed by neglecting the finite-β effect. Instead, when finite-β is retained and the low-n EM modes are destabilized by the large R/L_T_i, the ZF receives a significant positive portion of energy from these low-n modes, mostly from the n=1 LLM, and develops to a much larger amplitude than that in the case without finite-β. Such a favorable energy transfer is evidence of the self-regulatory system where an EM mode, serving as a catalyst, transfers the free energy it obtained from the ITB-generated large R/L_T_i to the ZF, which helps mitigate the heat fluxes and in turn sustain the ITB. Consequently, a self-organized mechanism is proposed which is characterized by an energy transfer that is facilitated by the saturation of the low-n EM modes, in this case the LLM, and that results in the increase of ZF activity and the reduction of heat fluxes. § DISCUSSION AND CONCLUSIONS The ITB characteristics in HL-2A have been analyzed by performing non-linear gyrokinetic simulations. The emphasis of our study has been placed on the effects of fast ions, finite-β and E × B shear. It is found that the complete ITB formation process can be conceptually divided into two stages where distinct mechanisms dominate. Widely effective as it is, the E × B shear stabilization in our cases is not found to play a remarkable role at any of these stages, mainly because the shearing rate γ_E is relatively low compared to the ITG growth rates. In the first stage, the plasma instabilities are dominated by ITG modes, which are subjected to the stabilizing effects of both finite-β and fast ions. It is found that the triggering of the ITB is mainly caused by the stabilization of the ITG under the effect of fast-ion dilution, which basically depends on linear physics. Once the ITB is fully developed, the sustaining of the ITB is determined by the reduction of turbulent heat transport by large scale zonal flows. In this second stage, the steep ITB-generated pressure gradient, combined with the effect of finite-β, is able to bring about an abundant variety of large scale EM modes, in our case the LLM. Instead of driving significant fluxes, the LLM acts as a catalyst that transfers the ITB free energy obtained during the triggering process to the zonal flows, which in turn mitigate the flux and ultimately sustain the ITB. 
The full ITB formation is therefore characterized as a self-regulatory multi-scale physics system leading to a self-sustained ITB. Although these conclusions are obtained from simulations which employ several simplifying models, such as the local assumption and Maxwellian fast ions, they provide an initial picture of the process of ITB formation. To further validate our conclusions, global gyrokinetic simulations, which are much more computationally demanding, may be necessary to rigorously account for the effect of large-scale flow shear and to completely accommodate all the modes involved. Nevertheless, the mechanism proposed in this paper could be important to future tokamak devices like ITER with low E × B shearing, which is not found to play a major role here in any stage of ITB formation, provided that the LLM can be induced, e.g., by tailoring the q-profile. § ACKNOWLEDGEMENT The authors are very grateful to Mr. Chen Qian, Mr. Zhang Xing, Mr. Fang Kairui, Dr. Hao Guangzhou, Dr. Yu Deliang and the HL-2A experiment team for providing and processing experimental data. This work was supported by the National Natural Science Foundation of China with grant Nos. 12275071 and U1967206 and also partially by the National Key R&D Program of China under Grant Nos. 2017YFE0301200 and 2017YFE0301201. § APPENDIX The derivation of Eq. <ref> is reported in the following. In the flux-tube version of GENE, the normalized gyrokinetic Vlasov equation for the modified distribution function g_1j of species j can be written as dg_1j/dt = L_G χ_1j + L_C ( g_1j + q_j χ_1jF_0j/T_j ) + L_∥( g_1j + q_j χ_1jF_0j/T_j ) - {χ_1j, g_1j}_x,y, where L_G is the gradient prefactor, L_G=((-3/2 + v^2_∥ + μ B_0) R/L_T_j + R/L_n_j)F_0j i k_y, L_C the curvature prefactor, L_C =T_j( 2v^2_∥ + μ B_0)/q_jB_0 K_x i k_x +(-T_j( 2v^2_∥ + μ B_0)/q_jB_0 K_y + βT_j v^2_∥ p_0/q_j B_0^2R/L_p_0 )ik_y, and L_∥ the parallel-dynamic operator, L_∥=v_th,jF_0j/21/JB{1/F_0j, }_v_∥,z. Here, the Poisson bracket of two arbitrary functions f and g over the variables u and v is defined as {f, g }_u,v=∂ f/∂ u∂ g/∂ v - ∂ f/∂ v∂ g/∂ u. When the nonlinear term in Eq. <ref> (the last term on the right-hand side) is evaluated in Fourier space at (k_x, k_y, z), the multiplications in the Poisson bracket are transformed into convolutions, i.e. -{χ_1j, g_1j}_x,y|_k = ∑_k_x^',k_y^' (k_x^'k_y-k_xk_y^') χ_1j|_k^' g_1j|_k-k^'. Full magnetic fluctuations are considered in χ_1=ϕ̅_1 - v_th,j v_∥A̅ _1,∥ + T_jμ / q_j B̅_1∥, the gyro-averaged effective potential, where the bar over quantities indicates the gyro-average. Note that in the local limit the gyro-average of the ES potential ϕ_1 is simply ϕ̅_1=J_0(k_⊥ρ_j)ϕ_1, where J_n is the Bessel function of order n. The symbol |_k indicates that the quantity before it is evaluated at (k_x,k_y). In the curvature term, K_x and K_y are the curvature factors in the radial and binormal directions, respectively. Their definitions, as well as those of other quantities, can be found in, e.g., refs. <cit.> and <cit.>. The field equation of the ES potential ϕ_1 is coupled with that of the parallel fluctuating magnetic field B_1∥ when finite-β is considered. The coupled field equations are C_1 ϕ_1 + C_2 B_1∥ = M J_0(k_⊥ρ_j) g_1j, C_2 ϕ_1 + C_3 B_1∥ = M 2J_1(k_⊥ρ_j)/(k_⊥ρ_j) T_jμ/q_j g_1j, from which we obtain ϕ_1 = C_3/(C_1C_3 - C_2^2) M J_0(k_⊥ρ_j) g_1j - C_2/(C_1C_3 - C_2^2) M 2J_1(k_⊥ρ_j)/(k_⊥ρ_j) T_jμ/q_j g_1j.
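As a small worked illustration of the coupled field equations above, the sketch below solves the 2×2 system pointwise in (k_x, k_y, z) by Cramer's rule. The right-hand sides stand for the velocity-space moments M J_0(k_⊥ρ_j) g_1j and M 2J_1(k_⊥ρ_j)/(k_⊥ρ_j) T_jμ/q_j g_1j, which are assumed to be precomputed, and all coefficient values in the toy usage are made up.

import numpy as np

def solve_fields(C1, C2, C3, rhs_phi, rhs_bpar):
    """Solve the coupled field equations
        C1*phi + C2*Bpar = rhs_phi,   C2*phi + C3*Bpar = rhs_bpar
    pointwise on the (kx, ky, z) grid by Cramer's rule."""
    det = C1 * C3 - C2 ** 2
    phi = (C3 * rhs_phi - C2 * rhs_bpar) / det       # matches the phi_1 expression above
    bpar = (C1 * rhs_bpar - C2 * rhs_phi) / det
    return phi, bpar

# toy usage (all arrays share an assumed (kx, ky, z) grid shape)
rng = np.random.default_rng(1)
shape = (4, 3, 8)
C1 = rng.uniform(1.0, 2.0, shape)
C2 = rng.uniform(0.0, 0.1, shape)
C3 = rng.uniform(50.0, 100.0, shape)   # C3 scales like 1/beta, hence large at low beta
b1 = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
b2 = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
phi, bpar = solve_fields(C1, C2, C3, b1, b2)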
The moment operator M is defined as M = ∑_j π n_j q_j ∫ dv_∥ d μ B_0, and the definitions of the coefficients C_1, C_2 and C_3 (which are real and only depend on k_x, k_y and z) can be found on page 33 of ref. <cit.>. By Eq. <ref>, the total derivative of the ES energy at k_y can be expressed as d E_n/dt = ∑_k_x∫ Jdz C_1 (∂ϕ_1^*/∂ t ϕ_1 + ϕ_1^* ∂ϕ_1/∂ t) = Re ∑_k_x∫ Jdz C_1 ϕ_1^* ∂ϕ_1/∂ t = Re ( ∑_k_x∫ Jdz C_1/(C_1C_3 - C_2^2) ϕ_1^* × M( C_3 J_0(k_⊥ρ_j) - C_2 2J_1(k_⊥ρ_j)/(k_⊥ρ_j) T_jμ/q_j ) ∂ g_1j/∂ t ). As the nonlinear contribution alone is of our concern, we can obtain, by substituting the ∂ g_1j/∂ t in Eq. <ref> with only the nonlinear term Eq. <ref>, dE_n/dt|_NL= Re∑_k_y^',k_x,k_x^'(k_x^'k_y-k_y^' k_x)∫ Jdz M ϕ_1^* |_k × (C_1 C_3/(C_1C_3 - C_2^2) J_0(k_⊥ρ_j) - C_1C_2/(C_1C_3 - C_2^2) 2J_1(k_⊥ρ_j)/(k_⊥ρ_j)) ×χ_1j|_k^' g_1j|_k-k^', where the commutativity between the moment operator M and any spatial quantities has been used. As the coefficient C_3 is inversely proportional to β, which is close to zero in our case, the second term in Eq. <ref> makes a negligible contribution to the total value. Therefore, taking the limit C_3 →∞, Eq. <ref> can be obtained from Eq. <ref>. But note that Eq. <ref> instead of <ref> is actually used to produce Fig. <ref>(c) for the sake of completeness.
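The C_3 → ∞ argument can be checked numerically with a few lines of Python: the sketch below evaluates the full prefactor C_1C_3/(C_1C_3 - C_2^2) J_0 - C_1C_2/(C_1C_3 - C_2^2) 2J_1/(k_⊥ρ_j) for decreasing β (with C_3 taken as 1/β) and compares it with its C_3 → ∞ limit J_0(k_⊥ρ_j), consistent with the reduction to the simpler expression quoted in the main text. The coefficient values are made up; the real C_1, C_2, C_3 depend on (k_x, k_y, z) and the species parameters.

import numpy as np
from scipy.special import j0, j1

def field_prefactor(C1, C2, C3, k_perp_rho):
    """Prefactor multiplying chi|_k' g|_{k-k'} in the full expression for dE_n/dt|_NL."""
    det = C1 * C3 - C2 ** 2
    return (C1 * C3 / det) * j0(k_perp_rho) - (C1 * C2 / det) * 2.0 * j1(k_perp_rho) / k_perp_rho

# representative (made-up) values; C3 scales like 1/beta
k_perp_rho = 0.7
C1, C2 = 1.3, 0.05
limit = j0(k_perp_rho)   # C3 -> infinity limit of the prefactor
for beta in (1e-2, 1e-3, 1e-4):
    full = field_prefactor(C1, C2, 1.0 / beta, k_perp_rho)
    print(f"beta={beta:.0e}: full={full:.6f} limit={limit:.6f} rel.diff={abs(full - limit) / abs(limit):.2e}")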
http://arxiv.org/abs/2307.05785v1
20230711202546
Making the Nyström method highly accurate for low-rank approximations
[ "Jianlin Xia" ]
math.NA
[ "math.NA", "cs.LG", "cs.NA", "15A23, 65F10, 65F30" ]
Making the Nyström method highly accurate for low-rank approximationsFor review. The research of Jianlin Xia was supported in part by an NSF grant DMS-2111007. Jianlin XiaDepartment of Mathematics, Purdue University, West Lafayette, IN 47907 ([email protected]). August 12, 2023 ================================================================================================================================================================ The Nyström method is a convenient heuristic method to obtain low-rank approximations to kernel matrices in nearly linear complexity. Existing studies typically use the method to approximate positive semidefinite matrices with low or modest accuracies. In this work, we propose a series of heuristic strategies to make the Nyström method reach high accuracies for nonsymmetric and/or rectangular matrices. The resulting methods (called high-accuracy Nyström methods) treat the Nyström method and a skinny rank-revealing factorization as a fast pivoting strategy in a progressive alternating direction refinement process. Two refinement mechanisms are used: alternating the row and column pivoting starting from a small set of randomly chosen columns, and adaptively increasing the number of samples until a desired rank or accuracy is reached. A fast subset update strategy based on the progressive sampling of Schur complements is further proposed to accelerate the refinement process. Efficient randomized accuracy control is also provided. Relevant accuracy and singular value analysis is given to support some of the heuristics. Extensive tests with various kernel functions and data sets show how the methods can quickly reach prespecified high accuracies in practice, sometimes with quality close to SVDs, using only small numbers of progressive sampling steps. Jianlin XiaHigh-accuracy Nyström methods high-accuracy Nyström method, kernel matrix, low-rank approximation, progressive sampling, alternating direction refinement, error analysis 15A23, 65F10, 65F30 § INTRODUCTION The Nyström method is a very useful technique for data analysis and machine learning. It can be used to quickly produce low-rank approximations to data matrices. The original Nyström method in <cit.> is designed for symmetric positive definite kernel matrices and it essentially uses uniform sampling to select rows/columns (that correspond to some subsets of data points) to serve as basis matrices in low-rank approximations. It has been empirically shown to work reasonably well in practice. The Nyström method is highly efficient in the sense that it can produce a low-rank approximation in complexity linear in the matrix size n (supposing the target approximation rank r is small). For problems with high coherence <cit.>, the accuracy of the usual Nyström method with uniform sampling may be very low. There have been lots of efforts to improve the method. See, e.g., <cit.>. In order to gain good accuracy, significant extra costs are needed to estimate leverage scores or determine sampling probabilities in nonuniform sampling <cit.>. Due to its modest accuracy, the Nyström method is usually used for data analysis and not much for regular numerical computations. In numerical analysis and scientific computing where controllable high accuracies are desired, often truncated SVDs or more practical variations like rank-revealing factorizations <cit.> and randomized SVD/sketching methods <cit.> are used. These methods can produce highly reliable low-rank approximations but usually cost O(n^2) operations. 
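To fix notation for what follows, here is a minimal Python sketch of the original Nyström method described above for a symmetric positive semidefinite kernel matrix: uniformly sample r columns J and form K ≈ K_{:,J} (K_{J,J})^+ (K_{:,J})^T. The Gaussian kernel, the data and the parameter choices are illustrative assumptions and are not taken from this paper.

import numpy as np

def nystrom_psd(K, r, seed=0):
    """Classical Nystrom approximation of a symmetric positive semidefinite matrix:
    uniformly sample r columns J and form K ~= K[:, J] pinv(K[J, J]) K[J, :]."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    J = rng.choice(n, size=r, replace=False)
    C = K[:, J]                        # sampled columns
    W = K[np.ix_(J, J)]                # intersection block
    return C @ np.linalg.pinv(W) @ C.T

# toy example: Gaussian kernel matrix on random 1-D points (illustrative only)
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 300)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)
K_hat = nystrom_psd(K, r=30)
print(np.linalg.norm(K - K_hat, 2) / np.linalg.norm(K, 2))   # typically a modest relative error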
The purpose of this work is to propose a set of strategies based on the Nyström method to produce high-accuracy low-rank approximations for kernel matrices in about linear complexity. The matrices are allowed to be nonsymmetric and/or rectangular. Examples include off-diagonal blocks of larger kernel matrices that frequently arise from numerical solutions of differential and integral equations, structured eigenvalue solutions, N-body simulations, and image processing. There has been a rich history in studying the low-rank structure of these off-diagonal kernel matrices based on ideas from the fast multipole method (FMM) <cit.> and hierarchical matrix methods <cit.>. To obtain a low-rank approximation to such a rectangular kernel matrix A with the Nyström method, a basic way is to choose respectively random row and column index sets ℐ and 𝒥 and then get a so-called CUR approximation A≈ A_:,𝒥A_ℐ,𝒥^+A_ℐ,:, where A_:,𝒥 and A_ℐ,: denote submatrices formed by the columns and rows of A corresponding to the index sets 𝒥 and ℐ, respectively, and A_ℐ,𝒥 can be understood similarly. However, the accuracy of (<ref>) is typically low, unless the so-called volume of A_ℐ,𝒥 happens to be sufficiently large <cit.>. It is well known that finding a submatrix with the maximum volume is NP-hard. Here, we would like to design adaptive Nyström schemes that can produce controllable errors (including near machine precision) while still retaining nearly linear complexity in practice. We start by treating the combination of the Nyström method and a reliable algebraic rank-revealing factorization as a fast pivoting strategy to select significant rows/columns (called representative rows/columns as in <cit.>). We then provide one way to analyze the resulting low-rank approximation error, which serves as a motivation for the design of our new schemes. Further key strategies include the following. * Use selected columns and rows to perform fast alternating direction row and column pivoting, respectively, so as to refine selections of representative rows and columns. * Adaptively attach a small number of new samples so as to perform progressive alternating direction pivoting, which produces new expanded representative rows and columns and advances the numerical rank needed to reach high accuracies. * Use a fast subset update strategy that successively samples the Schur complements so as to improve the efficiency and accelerate the advancement of the sizes of basis matrix toward target numerical ranks. * Adaptively control the accuracy via quick estimation of the approximation errors. Specifically, in the first strategy above, randomly selected columns are used to quickly perform row pivoting for A and obtain representative rows (which form a row skeleton A_ℐ,:). The row skeleton is further used to quickly perform column pivoting for A to obtain some representative columns (which form a column skeleton A_:,𝒥). This refines the original choice of representative columns. Related methods include various forms of the adaptive cross approximation (ACA) with row/column pivoting <cit.>, the volume sampling approximation <cit.>, and the iterative cross approximation <cit.>. In particular, the method in <cit.> iteratively refines selections of significant submatrices (with volumes as large as possible). However, later we can see that this strategy alone is not enough to reach high accuracy, even if a large number of initial samples is used. 
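Before turning to the remaining strategies, the plain CUR form (1) that serves as the starting point can be sketched as follows. The kernel, the well-separated point sets (stored as complex numbers) and the rank are illustrative assumptions, and the uniformly random choice of ℐ and 𝒥 is exactly what the rest of the paper seeks to refine.

import numpy as np

def random_cur(A, r, seed=0):
    """Plain CUR/Nystrom approximation (1): A ~= A[:, J] pinv(A[I, J]) A[I, :]
    with uniformly sampled row and column index sets I and J."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    I = rng.choice(m, size=r, replace=False)
    J = rng.choice(n, size=r, replace=False)
    return A[:, J] @ np.linalg.pinv(A[np.ix_(I, J)]) @ A[I, :]

# toy rectangular kernel block between two well-separated 2-D point sets (points as complex numbers)
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 200) + 1j * rng.uniform(0.0, 1.0, 200)
y = (rng.uniform(0.0, 1.0, 400) + 3.0) + 1j * rng.uniform(0.0, 1.0, 400)
A = 1.0 / np.abs(x[:, None] - y[None, :])
A_hat = random_cur(A, r=20)
print(np.linalg.norm(A - A_hat, 2) / np.linalg.norm(A, 2))   # accuracy depends strongly on the random pivot block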
Next in the second strategy, new column samples are attached progressively in small stepsizes so as to repeat the alternating direction pivoting until convergence is reached. Convenient uniform sampling is used since the sampled columns are for the purpose of pivoting. This eliminates the need to estimate sampling probabilities. The third strategy makes it possible to avoid applying pivoting to row/column skeletons with growing sizes. That is, the row (column) skeleton is expanded by quickly updating the previous skeleton when new columns (rows) are attached. We also give an aggressive subset update method that can quickly reach high accuracies with a small number of progressive sampling steps in practice. With the fourth strategy, we can conveniently control the number of sampling steps until a desired accuracy is reached. It avoids the need to perform quadratic-cost error estimation. The combination of these strategies leads to a class of low-rank approximation schemes which we call high-accuracy Nyström (HAN) schemes. They are heuristic schemes that are both fast and accurate in practice. Although a fully rigorous justification of the accuracy is lacking, we give different perspectives to motivate and support the ideas. Relevant analysis is provided to understand certain singular value and accuracy behaviors in terms of both deterministic rank-revealing factorizations and statistical error evaluation. We demonstrate the high accuracy of the HAN schemes through comprehensive numerical tests based on kernel matrices defined from various kernel functions evaluated at different data sets. In particular, an aggressive HAN scheme can produce approximation accuracies close to the quality of truncated SVDs. It is numerically shown to have nearly linear complexity and usually needs just a surprisingly small number of sampling steps. Additionally, the design of the HAN schemes does not require analytical information about the kernel functions or geometric information about the data points. They can then serve as fully blackbox fast low-rank approximation methods, as indicated in the tests. The remaining discussion is organized as follows. We show the pivoting strategy based on the Nyström method and give a way to study the approximation error in Section <ref>. The detailed design of the HAN schemes together with relevant analysis is given in Section <ref>. Section <ref> presents the numerical tests, followed by some concluding remarks in Section <ref>. § PIVOTING BASED ON THE NYSTRÖM METHOD AND AN ERROR STUDY We first consider a low-rank approximation method based on a pivoting strategy consisting of the Nyström method and rank-revealing factorizations of tall and skinny matrices. A way to study the low-rank approximation error will then be given. These will provide motivations for some of our ideas in the HAN schemes. Consider two sets of real data points in d dimensions: 𝐱={ x_1,x_2,…,x_m} , 𝐲={ y_1,y_2,…,y_n} . Let A be the m× n kernel matrix A=( κ(x_i,y_j)) _x_i∈𝐱,y_j∈𝐲, which is sometimes also referred to as the interaction matrix between 𝐱 and 𝐲. We would like to approximate A by a low-rank form. The strong rank-revealing QR or LU factorizations <cit.> are reliable ways to find low-rank approximations with high accuracy. They may be used to obtain an approximation (called an interpolative decomposition) of the following form: A≈ A_:,𝒥V^T with V=Q( [ I; F ]) , where Q is a permutation matrix, r≡|𝒥| (the size or cardinality of 𝒥) is the approximate (or numerical) rank, and ‖ F‖_max≤ c with c≥1.
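As a concrete but non-authoritative stand-in for such a factorization, the sketch below computes an interpolative decomposition A ≈ A_{:,𝒥} V^T with ordinary column-pivoted QR. Unlike a strong rank-revealing factorization, plain column pivoting does not guarantee the bound ‖F‖_max ≤ c, so the code is only meant to illustrate the shape of the output (the selected index set 𝒥 and the factor V).

import numpy as np
from scipy.linalg import qr

def interpolative_decomposition(A, r):
    """Column interpolative decomposition A ~= A[:, J] V.T via column-pivoted QR
    (a convenient stand-in here for a strong rank-revealing factorization)."""
    Q, R, piv = qr(A, mode='economic', pivoting=True)
    J = piv[:r]                                   # selected (representative) columns
    T = np.linalg.solve(R[:r, :r], R[:r, r:])     # interpolation coefficients: A[:, piv] ~= A[:, J] [I  T]
    V = np.zeros((A.shape[1], r))
    V[piv[:r], :] = np.eye(r)
    V[piv[r:], :] = T.T
    return J, V

# quick check on a matrix of exact rank 40 (illustrative only)
rng = np.random.default_rng(2)
A = rng.standard_normal((300, 40)) @ rng.standard_normal((40, 500))
J, V = interpolative_decomposition(A, r=40)
print(np.linalg.norm(A - A[:, J] @ V.T) / np.linalg.norm(A))   # near machine precision here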
c is a user-specified parameter and may be set to be a constant or a low-degree polynomial of m, n, and r <cit.>. We suppose r is small. The column skeleton A_:,𝒥 corresponds to a subset 𝐭⊂𝐲 which is a subset of landmark points. Here we also call 𝐭 a representative subset, which can be selected reliably by strong rank-revealing factorizations. A strong rank-revealing factorization may be further applied to A_:,𝒥^T to select a representative subset 𝐬⊂𝐱 corresponding to a row index set ℐ in A_:,𝒥. That is, we can find a pivot block A_ℐ,𝒥. Without loss of generality, we may assume |ℐ|=|𝒥|=r. (If the factorization produces ℐ with |ℐ|<|𝒥|, V can be modified so as to replace 𝒥 by an appropriate index set with size |ℐ|.) Thus, the resulting decomposition may be written as an equality A_:,𝒥=UA_ℐ,𝒥 with U=P( [ I; E ]) , where P is a permutation matrix and, with 1:m standing for 1,2,…,m, E=A_{1:m}\ℐ,𝒥A_ℐ,𝒥^-1, ‖ E‖_max≤ c. Since A_:,𝒥 is a tall and skinny matrix, we refer to (<ref>) as a skinny rank-revealing (SRR) factorization. (<ref>) and (<ref>) in turn lead to the approximation A≈ UA_ℐ,𝒥V^T. With (<ref>), we may further obtain a CUR approximation like in (<ref>) (with the pseudoinverse replaced by A_ℐ,𝒥^-1). The direct application of strong rank-revealing factorizations to A to obtain (<ref>) is expensive and costs O(rmn). To reduce the cost, we can instead follow the Nyström method and randomly sample columns from A to form A_:,𝒥. However, the accuracy of the resulting approximation based on the forms (<ref>) or (<ref>) may be low. On the other hand, we can view the SRR factorization (<ref>) as a way to quickly choose the representative subset 𝐬 (based on the interaction between 𝐱 and 𝐭 instead of the interaction between 𝐱 and 𝐲). In other words, (<ref>) is a way to quickly perform row pivoting for A so as to select representative rows A_ℐ,: from A. Then we can use the following low-rank approximation: A≈ UA_ℐ,: with U=P( [ I; E ]) , which may be viewed as a potentially refined form over (<ref>) when 𝒥 is randomly selected. (Note that P and E depend on 𝒥.) We would like to gain some insights into the accuracy of approximations based on the Nyström method. There are various earlier studies based on (<ref>). Those in <cit.> are relevant to our result below. When A is positive (semi-)definite, the analysis in <cit.> bounds the errors in terms of the distances between the landmark points and the remaining data points. A similar strategy is also followed in <cit.> for symmetric A. The resulting bound may be very conservative since it is common for some data points in practical data sets to be far away from the landmark points. In addition, the error bounds in <cit.> essentially involve a factor ‖ A_ℐ,𝒥^-1‖_2 (or ‖ A_ℐ,𝒥^+‖_2), which may be too large if high accuracy is desired. This is because the smallest singular value of A_11 may be just slightly larger than a smaller tolerance. Here, we provide a way to understand the approximation error based on (<ref>). It uses the minimization of a slightly overdetermined problem and does not involve ‖ A_ℐ,𝒥^-1‖_2. The following analysis does not aim to precisely quantify the error magnitude (which is hard anyway). Instead, it can serve as a motivation for some strategies in our high-accuracy Nyström methods later. Suppose 𝒥 is a given column index set with |𝒥|=r and (<ref>)–(<ref>) hold. Then the resulting approximation (<ref>) satisfies ‖ A-UA_ℐ,:‖_max≤2c√(r)max_1≤ i≤ m,1≤ j≤ nmin_v∈ℝ^r‖ A_ℐ_i,𝒥v-A_ℐ_i,j‖_2, where ℐ_i=ℐ∪{i} for each 1≤ i≤ m. 
From (<ref>) and (<ref>), we have, for any 1≤ i≤ m, 1≤ j≤ n, (A-UA_ℐ,:)_ij=(A-A_:,𝒥A_ℐ,𝒥^-1A_ℐ,:)_ij=A_ij-A_i,𝒥A_ℐ,𝒥^-1A_ℐ,j. It is obvious that A_ij-A_i,𝒥A_ℐ,𝒥^-1A_ℐ,j=0 if i∈ℐ. Thus, suppose i∈{1:m}\ℐ. For any v∈ℝ^r, |A_ij-A_i,𝒥A_ℐ,𝒥^-1A_ℐ,j| =|(A_ij-A_i,𝒥v)+(A_i,𝒥v-A_i,𝒥A_ℐ,𝒥^-1A_ℐ,j)| ≤|A_ij-A_i,𝒥v|+‖ A_i,𝒥A_ℐ,𝒥^-1‖_2‖ A_ℐ,𝒥v-A_ℐ,j‖_2 ≤|A_ij-A_i,𝒥v|+c√(r)‖ A_ℐ,𝒥v-A_ℐ,j‖_2, where the last step is because A_i,𝒥A_ℐ,𝒥^-1 is a row of E in (<ref>) and its entries have magnitudes bounded by c. With c≥1, we further have |A_ij-A_i,𝒥A_ℐ,𝒥^-1A_ℐ,j| ≤ c√(r)( |A_i,𝒥v-A_ij|+‖ A_ℐ,𝒥v-A_ℐ,j‖_2) ≤2c√(r)‖ A_ℐ_i,𝒥v-A_ℐ_i,j‖_2. Since this holds for all v∈ ℝ ^r, take the minimum for v to get the desired result. The bound in this lemma can be roughly understood as follows. If A_ℐ_i,j is nearly in the range of A_ℐ_i,𝒥 for all i,j, the bound in (<ref>) would then be very small and we would have found ℐ and 𝒥 that produce an accurate low-rank approximation (<ref>). Otherwise, to further improve the accuracy, it would be necessary to refine ℐ and 𝒥 and possibly include additional i and j indices respectively into ℐ and 𝒥. A heuristic strategy is to progressively pick i and j so that A_ℐ_i,j is as linearly independent from the columns of A_ℐ_i,𝒥 as possible. Motivated by this, we may use a subset refinement process. First, use randomly picked columns A_:,𝒥 to generate a row skeleton and then use the row skeleton to generate a new column skeleton. The new column skeleton suggests which new j should be attached to 𝒥. Next, if a desired accuracy is not reached, then randomly pick more columns to attach to the refined set 𝒥 and start a new round of refinement. Such a process is called progressive alternating direction pivoting (or subset refinement) below. § HIGH-ACCURACY NYSTRÖM SCHEMES In this section, we show how to use the Nyström method to design the high-accuracy Nyström (HAN) schemes that can produce highly accurate low-rank approximations in practice. We begin with the basic idea of the progressive alternating direction pivoting and then show how to perform fast subset update and how to conveniently control the accuracy. §.§ Progressive alternating direction pivoting The direct application of strong rank-revealing factorizations to A has quadratic complexity. One way to save the cost is as follows. Start from some column samples of A like in the usual Nyström method. Use the SRR factorization to select a row skeleton, which can then be used to select a refined column skeleton. The process can be repeated in a recursive way, leading to a fast alternating direction refinement scheme. A similar empirical scheme has been adopted recently in <cit.>. However, when high accuracies are desired, the effectiveness of this scheme may be limited. That is, just like the usual Nyström method, a brute-force increase of the initial sample size may not necessarily improve the approximation accuracy significantly. A high accuracy may require the initial sample size to be overwhelmingly larger than the target numerical rank, which makes the cost too high. Here, we instead adaptively or progressively apply the alternating direction refinement based on step-by-step small increases of the sample size. We use one round of alternating row and column pivoting to refine the subset selections. After this, if a target accuracy τ or numerical rank r is not reached, we include a small number of additional samples to repeat the procedure. 
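As a rough preview of the basic framework that is laid out step by step next, the following hedged Python sketch strings these ingredients together: column-pivoted QR is used in place of the SRR factorizations, the pivoting is recomputed from scratch in every round (the fast subset update introduced later avoids this), and the accuracy is checked by forming the full approximation (the schemes in this paper instead use the cheap randomized estimate discussed in the accuracy-control subsection). All names, defaults and the toy kernel are illustrative.

import numpy as np
from scipy.linalg import qr

def pivot_rows(B, k):
    # Select k representative rows of B via column-pivoted QR of B.T
    # (a stand-in for the skinny rank-revealing factorization in the text).
    _, _, piv = qr(B.T, mode='economic', pivoting=True)
    return np.sort(piv[:k])

def progressive_alt_pivoting(A, b=5, max_samples=60, tol=1e-12, seed=0):
    # Repeatedly: (i) draw b fresh column samples, (ii) refine a row skeleton from the
    # sampled columns, (iii) refine the column skeleton from the row skeleton, until a
    # target accuracy or the sample budget is reached.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    I = np.array([], dtype=int)
    J = np.array([], dtype=int)
    err = np.inf
    while J.size < max_samples:
        pool = np.setdiff1d(np.arange(n), J)
        if pool.size == 0:
            break
        J = np.union1d(J, rng.choice(pool, size=min(b, pool.size), replace=False))
        I = pivot_rows(A[:, J], min(J.size, m))        # row pivoting from sampled columns
        J = pivot_rows(A[I, :].T, min(I.size, n))      # column pivoting from the row skeleton
        # direct (expensive) error check of the CUR-type approximation, for illustration only
        A_hat = A[:, J] @ np.linalg.pinv(A[np.ix_(I, J)]) @ A[I, :]
        err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
        if err < tol:
            break
    return I, J, err

# toy usage on a smooth, numerically low-rank kernel block between separated point sets
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 300)
y = rng.uniform(3.0, 4.0, 600)
A = 1.0 / np.abs(x[:, None] - y[None, :])
I, J, err = progressive_alt_pivoting(A)
print(J.size, err)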
The basic framework to find a low-rank approximation to A in (<ref>) is as follows, where the subset 𝒥 is initially an empty set and b≤ r is a small integer as the stepsize in the progressive column sampling. * (Progressive sampling) Randomly choose a column index set 𝒥⊂{1:n}\𝒥 with |𝒥|=b and set 𝒥=𝒥∪𝒥. * (Row pivoting) Apply an SRR factorization to A_:,𝒥 to find a row index set ℐ: A_:,𝒥≈ UA_ℐ,𝒥, where U looks like that in (<ref>). * (Column pivoting) Apply an SRR factorization to A_ℐ,: to find a refined column index set 𝒥: A_ℐ,:≈ A_ℐ,𝒥V^T, where V looks like that in (<ref>). * (Accuracy check) If a desired accuracy, maximum sample size, or a target numerical rank is reached or if ℐ stays the same as in the previous step, return a low-rank approximation to A like the following and exit: Ã=UA_ℐ,:, A_:,𝒥V^T, or UA_ℐ,𝒥V^T. Otherwise, repeat from Step <ref>. (More details on the stopping criteria and fast error estimation will be given in Section <ref>.) This basic HAN scheme (denoted ) is illustrated in Figure <ref>, with more details given in Algorithm <ref>. Note that the key outputs of the SRR factorization (<ref>) are the index set ℐ and the matrix E. (The permutation matrix P is just to bring the index set ℐ to the leading part and does not need to be stored.) For convenience, we denote (<ref>) by the following procedure in Algorithm <ref> (with the parameter c in (<ref>) assumed to be fixed): ℐ,E]←𝖲𝖱𝖱(A_:,𝒥). The scheme may be understood heuristically as follows. Initially, with 𝒥 a random sample from the column indices, it is known that the expectation of the norm of a row of A_:,𝒥 is a multiple of the norm of the corresponding row in A (see, e.g., <cit.>). Thus, the relative magnitudes of the row norms of A can be roughly reflected by those of A_:,𝒥. It then makes sense to use A_:,𝒥 for quick row pivoting (by finding A_ℐ,𝒥 with determinant as large as possible). This strategy shares features similar to the randomized pivoting strategies in <cit.> which are also heuristic and work well in practice, except that the methods in <cit.> need matrix-vector multiplications with costs O(mn). With the resulting row pivot index set ℐ, the scheme further uses the SRR factorization to find a submatrix A_ℐ,𝒥 of A_ℐ,: with determinant as large as possible, which enables to refine the column selection. It may be possible to further improve the index sets through multiple rounds of such refinements like in <cit.>. However, the accuracy gain seems limited, even if a large initial sample size is used (as shown in our test later). Thus, we progressively attach additional samples (in small stepsizes) to the refined subset 𝒥 and then repeat the previous procedure. In practice, this makes a significant difference in reducing the approximation error. In this scheme, the sizes of the index sets ℐ and 𝒥 grow with the progressive sampling. Accordingly, the costs of the SRR factorizations (<ref>)–(<ref>) increase since the SRR factorizations at step i are applied to matrices of sizes m×(ib) or (ib)× n. With the total number of iterations N≈r/b, the total cost (excluding the cost to check the accuracy) is ξ_𝖧𝖠𝖭-𝖡=∑_i=1^NO( (ib)^2(m+n)) =O( r^3/b(m+n)) . With i increases, the iterations advance toward the target numerical rank or accuracy. §.§ Fast subset update via Schur complement sampling In the basic scheme , the complexity count in (<ref>) for the SRR factorizations at step i gets higher with increasing i. 
To improve the efficiency, we show how to update the index sets so that at step i, the SRR factorization (for the row pivoting step for example) only needs to be applied to a matrix of size (m-(i-1)b)× b instead of m×(ib), followed by some quick postprocessing steps. Suppose we start from a column index set 𝒥=𝒥∪𝒥 as in Step <ref> of the basic HAN scheme above. We would like to avoid applying the SRR factorizations to the full columns A_:,𝒥 in Step <ref> and the full rows A_ℐ,: in Step <ref>. We seek to directly produce an expanded column index set over 𝒥, as illustrated in Figure <ref>. It includes two steps. One is to produce an update ℐ to the row index set ℐ (Figure <ref>(a), which replaces Steps (c)–(d) in Figure <ref>) and the other is to produce an update to the column index set (Figure <ref>(b)). Clearly, we just need to show how to perform the first step. With the row pivoting step like in (<ref>), we can obtain a low-rank approximation of the form (<ref>). Using the row permutation matrix P in (<ref>) (computed in (<ref>)), we may write A as A =P( [ A_11 A_12; A_21 A_22 ]) ≈ P( [ I; E ]) ( [ A_11 A_12 ]) with A_11 =A_ℐ,𝒥, E=A_21A_11^-1. At this point, we have A=P( [ I ; E I ]) ( [ A_11 A_12; S ]) with S=A_22-EA_12, where S is the Schur complement. In the usual strong rank-revealing factorizations like the one in <cit.>, the low-rank approximation is obtained also from a decomposition of the form (<ref>) with S dropped. Here, our fast pivoting scheme is more efficient. Of course, the strong rank-revealing factorization in <cit.> guarantees the quality of the low-rank approximation in the sense that, there exist low-degree polynomials c≥1 and f≥1 in m, n, and k (size of A_11) such that (<ref>) holds and, for 1≤ i≤ k, 1≤ j≤min{m,n}-k, σ_i(A_11)≥σ_i(A)/f, σ_j(S)≤σ_k+j(A)f, ‖ A_11^-1A_12‖_max≤ c, where σ_i(·) denotes the i-th largest singular value of a matrix. Our subset update strategy is via the sampling of the Schur complement S. In fact, when A_ℐ,:=( [ A_11 A_12 ]) is accepted as a reasonable row skeleton, we then continue to find a low-rank approximation to S in (<ref>) so it makes sense to sample S. It is worth noting that the full matrix S is not needed. Instead, only its columns corresponding to A_:,𝒥 are formed. That is, we form S_:,ℒ=(A_22)_:,ℒ-E(A_12)_:,ℒ, where ℒ corresponds to 𝒥 and selects entries from {1:n}\𝒥 in a two-level composition of the index sets as follows: ({1:n}\𝒥)∘ℒ=𝒥. That is, sampling the columns of A with the index set 𝒥 is essentially to sample the columns of S with ℒ. For notational convenience, suppose the columns of A have been permuted so that P^TA=( [ A_11 A_12; A_21 A_22 ]) with A_:,𝒥=[ A_11; A_21 ] . Now, apply an SRR factorization to S_:,ℒ to get S_:,ℒ≈P̂( [ I; Ê ]) S_𝒦,ℒ. Then S≈P̂( [ I; Ê ]) S_𝒦,:. Accordingly, we may write S as S=P̂( [ S_22 S_23; S_32 S_33 ]) , where S_𝒦,:=( [ S_22 S_23 ]). From (<ref>) and (<ref>), S can be further written as S=P̂( [ I ; Ê Ŝ ]) ( [ S_22 S_23; I ]) with Ŝ=S_33-ÊS_23, where Ŝ is a new Schur complement (and is not formed). At this point, we have the following proposition which shows how to expand the row index set ℐ by an update ℐ. A may be factorized as A=P̃( [ I ; Ẽ I ]) ( [ A_ℐ,𝒥 A_ℐ,{1:n}\𝒥; Ŝ ]) , where P̃ is a permutation matrix, ℐ=ℐ∪ℐ with ℐ=({1:m}\ℐ)∘𝒦, and ‖Ẽ‖_max≤ bc^2+c when ‖ E‖_max≤ c, ‖Ê‖_max≤ c, and |𝒥|=b. 
(<ref>) and (<ref>) lead to A =P( [ I ; E I ]) ( [ A_11 A_12; P̂[ I ; Ê Ŝ ][ S_22 S_23; I ] ]) =P( [ I ; E P̂[ I ; Ê Ŝ ] ]) ( [ A_11 A_12; [ S_22 S_23; I ] ]) =P( [ I ; P̂ ]) ( [ I ; P̂^TE [ I ; Ê Ŝ ] ]) ( [ A_11 A_12; [ S_22 S_23; I ] ]) =P̃( [ I ; E_1 I ; E_2 Ê Ŝ ]) ( [ A_11 Â_12 Â_13; S_22 S_23; I ]) , where P̃=P[ I ; P̂ ], P̂^TE is partitioned as [ E_1; E_2 ] conformably with [ I; Ê ], and A_12 is partitioned as ( [ Â_12 Â_13 ]) with Â_12=A_ℐ,𝒥. We can now factorize the second factor on the far right-hand side of (<ref>) as ( [ I ; E_1 I ; E_2 Ê Ŝ ]) =( [ I ; I ; E̅ Ê Ŝ ]) ( [ I ; E_1 I ; I ]) , where E̅=E_2-ÊE_1. Then, A may be written as A =P̃( [ I ; I ; E̅ Ê Ŝ ]) ( [ I ; E_1 I ; I ]) ( [ A_11 Â_12 Â_13; S_22 S_23; I ]) =P̃( [ I ; I ; E̅ Ê I ]) ( [ A_11 Â_12 Â_13; Â_21 Â_22 Â_23; Ŝ ]) where [ Â_21 Â_22 Â_23 ] =[ 0 S_22 S_23 ] +E_1[ A_11 Â_12 Â_13 ] . The block [ Â_21 Â_22 Â_23 ] essentially corresponds to the rows of A with index set ℐ in (<ref>). This is because of the special form of the second factor on the right-hand side of (<ref>). Then get (<ref>) by letting Ẽ=[ E̅ Ê ] . Now since ‖ E‖_max≤ c, ‖Ê‖_max≤ c, and Ê has column size b, we have ‖E̅‖_max≤ bc^2+c. Accordingly, ‖Ẽ‖_max≤ bc^2+c. This proposition shows that we can get a factorization (<ref>) similar to (<ref>), but with the expanded row skeleton A_ℐ,:. Accordingly, we may then obtain a new approximation to A similar to (<ref>): A≈ŨA_ℐ,: with Ũ=P̃( [ I; Ẽ ]) , To support the reliability of such an approximation, we can use the following way. As mentioned in Remark <ref>, if (<ref>) is assumed to be obtained by a strong rank-revealing factorization, then we would have nice singular value bounds in (<ref>). Now, if we assume that is the case and (<ref>) is also obtained by a strong rank-revealing factorization, then we would like to show (<ref>) from the subset update would also satisfy some nice singular value bounds. For this purpose, we need the following lemma. If (<ref>) is assumed to satisfy (<ref>) with k=|ℐ| the size of A_11, and (<ref>) is assumed to satisfy, for 1≤ i≤ b, 1≤ j≤min{m,n}-k-b, σ_i(S_22)≥σ_i(S)/f̂, σ_j(Ŝ)≤σ_b+j(S)f̂, where f̂≥1, then μ=σ_k(A_11)/σ_1(S_22) satisfies 1/f^2≤μ≤ sf̂, where s=σ_k(A)/σ_k+1(A). By (<ref>) and the interlacing property of singular values, σ_k(A_11)≥σ_k(A)/f, σ_1(S_22)≤σ_1(S)≤σ_k+1(A)f, which yield μ≥σ_k(A)/f/σ_k+1(A)f≥1/f^2. Similarly, by the interlacing property of singular values and (<ref>), σ_k(A_11)≤σ_k(A), σ_1(S_22)≥σ_1(S)/f̂≥σ_k+1(A)/f̂, where the result σ_1(S)≥σ_k+1(A) directly follows from Weyl's inequality or <cit.>: σ_k+1(A) =σ_k+1( P( [ I; E ]) ( [ A_11 A_12 ]) +P( [ 0 ; S ]) ) ≤σ_k+1( P( [ I; E ]) ( [ A_11 A_12 ]) ) +σ_1( P( [ 0 ; S ]) ) =0+σ_1(S). Then μ≤σ_k(A)/σ_k+1(A)/f̂=sf̂. As a quick note, here s=σ_k(A)/σ_k+1(A) reflects the gap between σ_k(A) and σ_k+1(A). Since we seek to expand the index sets ℐ and 𝒥 (and k hasn't yet reached the target numerical rank r), it is reasonable to regard s as a modest magnitude. Now we are ready to show the singular value bounds. With the assumptions and notation in Lemma <ref>, (<ref>) satisfies, for 1≤ i≤ k+b, 1≤ j≤min{m,n}-k-b, σ_i(A_ℐ,𝒥)≥σ_i(A)/f̃, σ_j(Ŝ)≤σ _k+b+j(A)f̃, where f̃=(1+sf̂+sf̂b^2c^2)f^2f̂. According to (<ref>), A_ℐ,𝒥=[ I ; E_1 I ][ A_11 Â_12; S_22 ]. With a strategy like in <cit.>, rewrite A_ℐ,𝒥 as A_ℐ,𝒥=[ I ; E_1 1/√(μ)I ][ A_11 ; μ S_22 ][ I A_11^-1Â_12; 1/√(μ)I ] , where μ=σ_k(A_11)/σ_1(S_22). 
Then [ A_11 ; μ S_22 ] =[ I ; -√(μ)E_1 √(μ)I ] A_ℐ,𝒥[ I -√(μ)A_11^-1Â_12; √(μ)I ] By <cit.>, σ_i( [ A_11 ; μ S_22 ]) ≤σ_i(A_ℐ,𝒥)‖[ I ; -√(μ)E_1 √(μ)I ]‖ _2‖[ I -√(μ)A_11^-1Â_12; √(μ)I ]‖ _2 ≤σ_i(A_ℐ,𝒥)√(1+μ+μ‖ E_1‖_2^2)√(1+μ+μ‖ A_11^-1Â_12‖_2^2) ≤σ_i(A_ℐ,𝒥)(1+sf̂+sf̂b^2c^2), where the last inequality is from Lemma <ref> and the fact that E_1 and A_11^-1Â_12 are b× b matrices with entrywise magnitudes bounded by c. Thus, σ_i(A_ℐ,𝒥)≥1/1+sf̂+sf̂b^2c^2σ_i( ( [ A_11 ; μ S_22 ]) ) . Since σ_k(A_11)=σ_1(μ S_22), we get σ_i( [ A_11 ; μ S_22 ]) =σ_i(A_11), 1≤ i≤ k, σ_k+i( [ A_11 ; μ S_22 ]) =σ_i(μ S_22), 1≤ i≤ b. By (<ref>) and Lemma <ref>, σ_i(μ S_22)≥μσ_i(S)/f̂≥1/f^2f̂σ_i(S)≥1/f^2f̂σ_k+i(A), 1≤ i≤ b, where the result σ_i(S)≥σ_k+i(A) again follows from Weyl's inequality or <cit.>. Putting (<ref>) and the first inequality in (<ref>) into (<ref>) to get σ_i(A_ℐ,𝒥)≥1/1+sf̂+sf̂b^2c^21/f^2f̂σ_i(A), 1≤ i≤ k+b. Finally, (<ref>) and (<ref>) yield σ_j(Ŝ)≤σ_b+j(S)f̂≤σ_k+b+j(A)ff̂, 1≤ j≤min{m,n}-k-b. Then taking f̃=(1+sf̂+sf̂b^2c^2)f^2f̂ to get (<ref>). This proposition indicates that, if (<ref>) and (<ref>) are assumed to result from strong rank-revealing factorizations, then (<ref>) as produced by the subset update method would also enjoy nice singular value properties like in a strong rank-revealing factorization. This supports the effectiveness of performing the subset update. Although here we obtain (<ref>) and (<ref>) through the much more economic SRR factorizations coupled with the Nyström method, it would be natural to use subset updates to quickly get the expanded index set ℐ (from the original index set ℐ). The SRR factorizations are only applied to blocks with column sizes b instead of ib in step i. In a nutshell, the subset update process starts from a row skeleton A_ℐ,:, samples the Schur complement S, and produces an expanded row skeleton A_ℐ,: and the basis matrix Ũ in (<ref>). The process is outlined in Algorithm <ref>. Such a subset update strategy can also be applied to expand the column index set 𝒥. That is, when ℐ is expanded into ℐ∪ℐ, we can apply the strategy above with 𝒥 replaced by ℐ, 𝒥 replaced by ℐ, and relevant columns replaced by rows. We then incorporate the subset update strategy into the basic HAN scheme. There are two ways to do so with different performance (see Algorithm <ref>). * : This is an HAN scheme with fast updates for both the row subsets and the column subsets. Thus, both the index sets ℐ and 𝒥 are expanded through updates. In this scheme, |ℐ| and |𝒥| are each advanced by stepsize b in every iteration step. * : This is an HAN scheme with aggressive updates where only, say, the column index set 𝒥 is updated. The row index set ℐ is still updated via the usual SRR pivoting applied to A_:,𝒥 (line <ref> of Algorithm <ref>). This scheme potentially expands the index sets much more aggressively. The reason is as follows. The SRR factorization of A_:,𝒥 may update ℐ to a very different set and the set difference ℐ (line <ref> of Algorithm <ref>) may have size comparable to |𝒥|. Then, the column subset update is applied based on A_ℐ,: as in line <ref> of Algorithm <ref> and can potentially increase the size of 𝒥 by |ℐ|. If Algorithm <ref> is applied at the ith iteration of Algorithm <ref> as in line <ref>, the main costs are as follows. * The formation of S_:,ℒ costs O( (m-(i-1)b)(i-1)b^2) +O( (m-(i-1)b)b) . * The SRR factorization of S_:,ℒ in (<ref>) costs O( b^2(m-(i-1)b)) . * The computation of (<ref>) costs O( (m-ib)(i-1)b^2) +O( (m-ib)(i-1)b) . 
These costs add up to O(ib(2bm+m+2b^2)), where some low-order terms are dropped and b is assumed to be a small fixed stepsize. The scheme applies Algorithm <ref> to both the row and the column subset updates. Accordingly, with N≈r/b iterations, the total cost of the scheme is ξ_𝖧𝖠𝖭-𝖴=∑_i=1^NO(ib(2b(m+n)+m+n+4b^2))=O( r^2(m+n)) , which is a significant reduction over the cost in (<ref>). The cost of the scheme depends on how many iteration steps are involved and on how aggressive the index sets advance. In the most aggressive case, suppose at each step the updated index set ℐ (or 𝒥) doubles the size from the previous step, then it only needs Ñ≈log_2r/b steps. Accordingly, the cost is ξ_𝖧𝖠𝖭-𝖠=∑_i=1^ÑO( (2^i-1b)^2(m+n)) =O(r^2(m+n)), which is comparable to ξ_𝖧𝖠𝖭-𝖴. Moreover, in such a case, would only need about blog_2r/b column samples instead of about r samples, which makes it possible to find a low-rank approximation with a total sample size much smaller than r. This has been observed frequently in numerical tests (see Section <ref>). §.§ Stopping criteria and adaptive accuracy control The HAN schemes output both ℐ and 𝒥 so we may use UA_ℐ,:, A_:,𝒥V^T, or UA_ℐ,𝒥V^T as the output low-rank approximation, where V and U look like those in (<ref>) and (<ref>), respectively. Based on the differences of the schemes, we use the following choice which works well in practice: Ã={[ A_:,𝒥V^T, or ,; UA_ℐ,:, . ]. The reason is as follows. A_:,𝒥V^T is the output from the end of the iteration and is generally a good choice. On the other hand, since obtains U from a full strong rank-revealing factorization step which potentially gives better accuracy, so UA_ℐ,: is used for . The following stopping criteria may be used in the iterations. * The iterations stop when a maximum sample size or a target numerical rank is reached. The numerical rank is reflected by |ℐ| or |𝒥|, depending on the output low-rank form in (<ref>). * In and , the iteration stops when ℐ stays the same as in the previous step. * Another criterion is when the approximation error is smaller than τ. It is generally expensive to directly evaluate the error. There are various ways to estimate it. For example, in and , we may use the following bound based on (<ref>) and (<ref>): ‖ A-Ã‖_2=‖ S‖_2≈‖[ I; Ê ][ S_22 S_23 ]‖ _2≤‖[ I; Ê ]‖ _2‖[ S_22 S_23 ]‖ _2. (Note the approximations to A and S are obtained by randomization.) We may also directly estimate the absolute or relative approximation errors without the need to evaluate ‖ A-Ã‖_2 or ‖ A‖_2. In the following, we give more details. The following lemmas suggest how to estimate the absolute and relative errors. Suppose k=|𝒥| for the column index set 𝒥 of A_11 in (<ref>) and ℒ is given in (<ref>) with 𝒥 from a uniform random sampling of {1:n}\𝒥 and |𝒥|=b. Let θ=n-k/b‖ S_:,ℒ‖_F^2 and ℰ=A-UA_ℐ,:. Then E(θ)=‖ℰ‖_F^2. If (<ref>) is further assumed to satisfy (<ref>), then ( |θ-‖ℰ‖_F^2|≥((n-k)f^2)(σ_k+1(A))^2) ≤2exp( -2b) . From (<ref>), ‖ℰ‖_F=‖ S‖_F. S has size (m-k)×(n-k). S_:,ℒ essentially results from the uniform sampling of the columns of S with ℒ in (<ref>). Let C be the submatrix formed by the k columns of the order-(n-k) identity matrix corresponding to the column index set ℒ. Then S_:,ℒ =SC, E(‖ S_:,ℒ‖_F^2) =E(trace(S_:,ℒ^TS_:,ℒ))=E(trace(C^TS^TSC)) =b/n-ktrace(S^TS)=b/n-k‖ S‖ _F^2, where the equality from the first line to the second directly comes by the definition of expectations and is a trace estimation result in <cit.>. This gives (<ref>). 
If (<ref>) is further assumed to satisfy (<ref>), then ‖ S_:,j‖_2≤‖ S‖_2≤ fσ_k+1(A). The probability result can be obtained like in <cit.> by writing ‖ S_:,ℒ‖_F^2 as the sum of b squares ‖ S_:,j‖_2^2 and applying Hoeffding's inequality: ( |θ-‖ℰ‖_F^2|≥ε) ≤2exp( -2bε^2/(n-k)^2max_j‖ S_:,j‖_2^4) ≤2exp( -2bε^2/(n-k)^2( fσ _k+1(A)) ^4) . Setting ε=(n-k)( fσ_k+1(A)) ^2 to get the result. The probability result indicates that, even with small b, θ is a very accurate estimator for ‖ℰ‖_F^2 (provided that (<ref>) holds). We can further consider the estimation of the relative error. With the assumptions and notation in Lemma <ref>, H=n-k/bS_:,ℒS_:,ℒ^T satisfies ‖ℰ‖_2/‖ A‖_2≤√(‖E(H)‖_2)/‖ A_11‖_2≤ f^2σ_k+1(A)/‖ A‖_2. With (<ref>), E(S_:,ℒS_:,ℒ^T)=E(SCC^TS^T)=S[E(CC^T)]S^T=b/n-kSS^T, where E(CC^T)=b/n-rI is simply by the definition of expectations and has been explored in, say, <cit.>. This leads to √(‖E(H)‖_2)=‖ S‖_2=‖ℰ‖_2, which, together with ‖ A_11‖_2≤‖ A‖_2, yields the first inequality in (<ref>). The second inequality in (<ref>) is based on (<ref>): √(‖E(H)‖_2)/‖ A_11‖_2=‖ S‖_2/‖ A_11‖_2≤fσ_k+1(A)/‖ A‖_2/f=f^2σ_k+1(A)/‖ A‖_2. From these lemmas, we can see that the absolute or relative errors in the low-rank approximation may be estimated by using S_:,ℒ and A_11. For example, a reasonable estimator for the relative error of the low-rank approximation à is given by ϕ=√(n-k/b)‖ S_:,ℒ‖_2/‖ A_11‖_2(≈‖ A-Ã‖_2/‖ A‖_2). This estimator can be quickly evaluated and only costs O(b(m-k)+b^2+k^2). The cost may be further reduced to O(b^2+k^2) by using √(n-k/b)‖ S_𝒦,ℒ‖_2/‖ A_11‖_2 since S_𝒦,ℒ results from a strong rank-revealing factorization applied to S_:,ℒ and there is a low-degree polynomial g in m-k and b such that ‖ S_:,ℒ‖_2/g≤‖ S_𝒦,ℒ‖_2≤‖ S_:,ℒ‖_2. To enhance the reliability, we may stop the iteration if the estimators return errors smaller than a threshold consecutively for multiple steps. § NUMERICAL TESTS We now illustrate the performance of the HAN schemes and compare with some other Nyström-based schemes. The following methods will be tested: * , , : the HAN schemes as in Algorithms <ref> and <ref>; * : the traditional Nyström method to produce an approximation like in (<ref>), where both the row index set ℐ and the column index set 𝒥 are uniformly and randomly selected; * : the scheme to find an approximation like in (<ref>) but with ℐ obtained by one pivoting step (<ref>) applied to uniformly and randomly selected A_:,𝒥; * : the scheme that extends by applying several steps of alternating direction refinements to improve ℐ and 𝒥 like in lines <ref>–<ref> of Algorithm <ref>, which corresponds to the iterative cross-approximation scheme in <cit.>. (In , the accuracy typically stops improving after few steps of refinement, so we fix the number of refinement steps to be 10 in the tests.) In the HAN schemes , , and , the stepsize b in the progressive column sampling is set to be b=5. The stopping criteria follow the discussions at the beginning of Section <ref>. Specifically, the iteration stops if the randomized relative error estimate in (<ref>) is smaller than the threshold τ=10^-14, or if the total sample size S (in all progressive sampling steps) reaches a certain maximum, or if the index refinement no longer updates the row index set ℐ. Since the HAN schemes involve randomized error estimation, it is possible for some iterations to stop earlier or later than necessary. 
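A hedged sketch of this type of randomized stopping check, written directly from the estimator ϕ in the display above, is given below. It assumes square pivot blocks (|ℐ| = |𝒥| = k) and forms the sampled Schur-complement columns explicitly for clarity, whereas in the actual schemes these columns are already available from the subset update.

import numpy as np

def relative_error_estimate(A, I, J, b=5, seed=0):
    """phi = sqrt((n - k)/b) * ||S[:, sampled]||_2 / ||A[I, J]||_2, where the sampled
    Schur-complement columns are S[:, j] = A[I_c, j] - A[I_c, J] A[I, J]^{-1} A[I, j]
    for b fresh column indices j drawn uniformly outside J (I_c = complement of I).
    Assumes |I| = |J| = k so that A[I, J] is square."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = len(J)
    I_c = np.setdiff1d(np.arange(m), I)
    fresh = rng.choice(np.setdiff1d(np.arange(n), J), size=b, replace=False)
    A11 = A[np.ix_(I, J)]
    E = np.linalg.solve(A11.T, A[np.ix_(I_c, J)].T).T            # E = A21 A11^{-1}
    S_cols = A[np.ix_(I_c, fresh)] - E @ A[np.ix_(I, fresh)]     # sampled Schur-complement columns
    return np.sqrt((n - k) / b) * np.linalg.norm(S_cols, 2) / np.linalg.norm(A11, 2)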
Also, does not use the fast subset update strategy in Section <ref>, so an extra step is added to estimate the accuracy with (<ref>). The Nyström-based schemes , , and are directly applied with different given sample sizes S and do not really have a fast accuracy estimation mechanism. In the plots below for the relative approximation errors ‖ A-Ã‖_2/‖ A‖_2, the Nyström and HAN schemes are put together for comparison. However, it is important to distinguish the meanings of the sample sizes S for the two cases along the horizontal axes. For the Nyström schemes, each S is set directly. For the HAN schemes, each S is the total sample size of all sampling steps and is reached progressively through a sequence of steps each of stepsize b. In the three Nyström schemes, the cardinality |ℐ| will be reported as the numerical rank. In the HAN schemes, the numerical rank will be either |ℐ| or |𝒥|, depending on the low-rank form in (<ref>). Since the main applications of the HAN schemes are numerical computations, our tests below focus on two and three dimensional problems, including some discretized meshes and some structured matrix problems. We also include an example related to high-dimensional data sets. The tests are done in Matlab R2019a on a cluster using two 2.60GHz cores and 32GB of memory. First consider some kernel matrices generated by the evaluation of various commonly encountered kernel functions evaluated at two well-separated data points 𝐱 and 𝐲 in two and three dimensions. 𝐱 and 𝐲 are taken from the following four data sets (see Figure <ref>). * : a flower shape curve, where the 𝐱 set is located at a corner and |𝐱|=1018, |𝐲|=13965. * : a 2D finite element mesh extracted from the package MESHPART <cit.>, where the 𝐱 set is surrounded by the points in 𝐲 with |𝐱|=821, |𝐲|=4125. The mesh is from an example in <cit.> that shows the usual Nyström method fails to reach high accuracies for some kernel matrices even with the number of samples near the numerical rank. * : an unstructured 2D mesh (airfoil) from the SuiteSparse matrix collection (http://sparse.tamu.edu), where the 𝐱 and 𝐲 sets are extracted so that 𝐱 has a roughly rectangular shape and |𝐱|=617, |𝐲|=11078. * : A set of 3D data points extract from the package DistMesh <cit.> but with the 𝐲 points randomly perturbed with |𝐱|=717, |𝐲|=6650. The points in the data sets are nonuniformly distributed in general, except in the case where the points are more uniform. The data points in two dimensions are treated as complex numbers. The setup of the 𝐱 and 𝐲 sets has the size of 𝐱 just several times larger than the target numerical rank. This is often the case in the FMM and structured solvers where the corresponding matrix blocks are short and wide off-diagonal blocks that need to be compressed in the hierarchical approximation of a global kernel matrix (see, e.g., <cit.>). We consider several types of kernels as follows: κ(x,y)= 1/x-y, 1/(x-y)^2, 1/|x-y|, √(|x-y|+1), 1/√(|x-y|^2+1), e^-|x-y|, e^-α|x-y|^2, log|x-y|, tan(x· y+1), where α is a parameter. Such kernels are frequently used in the FMM and in structured matrix computations like Toeplitz solutions <cit.> and some structured eigenvalue solvers <cit.>. For data points in three dimensions, |x-y| represents the distance between x and y. For each data set, we apply the methods above to the kernel matrices A as in (<ref>) formed by evaluating some κ(x,y) at 𝐱 and 𝐲. Most of the kernel matrices have modest numerical ranks. 
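The kind of test matrix used here is easy to reproduce; the sketch below builds a few of the listed kernels on two made-up, well-separated 2-D point sets (stored as complex numbers, as in the text) and reports their numerical ranks at a fixed relative tolerance. The point sets, sizes, tolerance and the implicit Gaussian width are illustrative assumptions and do not correspond to the specific data sets used in the experiments.

import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(0.0, 1.0, 400) + 1j * rng.uniform(0.0, 1.0, 400)
y = (rng.uniform(0.0, 1.0, 2000) + 3.0) + 1j * rng.uniform(0.0, 1.0, 2000)
X, Y = x[:, None], y[None, :]

kernels = {
    "1/(x-y)":       1.0 / (X - Y),
    "1/|x-y|":       1.0 / np.abs(X - Y),
    "exp(-|x-y|)":   np.exp(-np.abs(X - Y)),
    "log|x-y|":      np.log(np.abs(X - Y)),
    "exp(-|x-y|^2)": np.exp(-np.abs(X - Y) ** 2),
}

tol = 1e-10
for name, A in kernels.items():
    s = np.linalg.svd(A, compute_uv=False)
    r = int(np.sum(s > tol * s[0]))     # numerical rank at relative tolerance tol
    print(f"{name:>15s}: numerical rank {r} for a {A.shape[0]} x {A.shape[1]} block")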
The schemes , , and use sample sizes S up to 400 in almost all the tests. The HAN schemes use much smaller sample sizes. and use sample sizes S≤200 for most tests, and uses sample sizes S≤50 for all the cases. For some kernels evaluated at the set , the relative errors ‖ A-Ã‖_2/‖ A‖_2 in one test run are reported in Figure <ref>. With larger S, the error typically gets smaller. However, is only able to reach modest accuracies even if S is quite large. (The error curve nearly stagnate in the first row of Figure <ref> with increasing S.) The accuracy gets better with for some cases. can further improve the accuracy. However, they still cannot get accuracy close to τ=10^-14 and their error curves in the second row of Figure <ref> get stuck around some small rank sizes insufficient to reach high accuracies. In comparison, the HAN schemes usually yield much better accuracies, especially with and . is often less accurate than but is more efficient because of the fast subset update. The most remarkable result is from , which quickly reaches accuracies around 10^-15 after few sampling steps (with small overall sample sizes). The second row of Figure <ref> also includes the scaled singular values σ_i(A)/σ_1(A). We can observe that and particularly produce approximation errors with decay patterns very close to that of SVD. To further confirm the accuracies, we run each scheme 100 times and report the results in Figure <ref>. In general, we observe that the HAN schemes are more accurate, especially and . The direct outcome from is not accurate, but this is likely due to the quality of the V factor in (<ref>). In fact, most other schemes end the iteration with a low-rank approximation in (<ref>) after one row or column pivoting step by an SRR factorization. Thus, if we apply an additional row pivoting step to A_:,𝒥 at the end of so as to generate a new approximation UA_ℐ,: like in (<ref>), then the resulting errors of (called effective errors in Figure <ref>) are close to those of . Similarly, for the other data sets and various different kernel functions, we have test results as given in Figures <ref>–<ref>. The results can be interpreted similarly. For some cases, , , and even may be quite inaccurate. One example is for κ(x,y)=e^-16|x-y|^2 in Figures <ref> and <ref>, where even becomes quite unreliable and demonstrates oscillatory errors for different S and different tests. The aggressive rank advancement also makes very efficient. For each data set, the average timing of and from 100 runs is shown in Table <ref>. is generally faster than by multiple times. Next, consider a class of implicitly defined kernel matrices with varying sizes. Suppose C is a circulant matrix with eigenvalues being discretized values of a function f(t) at some points in an interval. Such matrices appear in some image processing problems <cit.>, solutions of ODEs and PDEs <cit.>, and spectral methods <cit.>. They are usually multiplied or added to some other matrices so that the circulant structure is destroyed. However, it is shown in <cit.> that they have small off-diagonal numerical ranks for some f(t). Such rank structures are preserved under various matrix operations. The matrix A we consider here is the n× n upper right corner block of C (with half of the size of C). It is also shown in <cit.> that A is the evaluation of an implicit kernel function over certain data points. We consider A with its size n=512,1024,…,16384 so as to demonstrate that can reach high accuracies with nearly linear complexity. 
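A hedged sketch of how such a test matrix can be generated is given below: a circulant matrix with prescribed eigenvalues is built from the inverse FFT of the sampled symbol, and its upper-right corner block is extracted. The particular symbol f(t), the grid and the tolerance are illustrative assumptions only; the implicit kernel interpretation and the precise f used in the paper's tests are not reproduced.

import numpy as np
from scipy.linalg import circulant

def corner_block_from_circulant(f, n):
    """Build a 2n x 2n circulant matrix whose eigenvalues are f on a uniform grid of
    [0, 2*pi), and return its n x n upper-right corner block."""
    N = 2 * n
    t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    c = np.fft.ifft(f(t))              # first column of the circulant with eigenvalues f(t)
    C = circulant(c)
    return C[:n, n:]

# example with a smooth, real symbol (illustrative choice)
A = corner_block_from_circulant(lambda t: 1.0 / (2.0 - np.cos(t)), n=512)
s = np.linalg.svd(A, compute_uv=False)
print(int(np.sum(s > 1e-12 * s[0])))   # small numerical rank of the corner block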
For each n, we run for 10 times and report the outcome. As n doubles, Figure <ref>(a) shows the numerical ranks r from , which slowly increase with n. This is consistent with the result in <cit.> where it is shown that the numerical ranks grow as a low-degree power of log n. The low-rank approximation errors are given in Figure <ref>(b) and the average time from the 10 runs for each n is given in Figure <ref>(c). The runtimes roughly follow the O(r^2n) pattern, as explained in Section <ref>. Finally for completeness, we would like to show that the HAN schemes also work for high-dimensional data sets. (We remark that practical data analysis may not necessarily need very high accuracies. However, the HAN schemes can serve as a fast way to convert such data matrices into some rank structured forms that allow quick matrix operations.) We consider kernel matrices resulting from the evaluation of some kernel functions at two data sets and from the UCI Machine Learning Repository (https://archive.ics.uci.edu). The two data sets have 4177 and 13611 points in 8 and 16 dimensions, respectively. Here, each data set is standardized to have mean 0 and variance 1. We take the submatrix of each resulting kernel matrix formed by the first 1000 rows so as to make it rectangular and nonsymmetric. A set of test results is given in Figure <ref>. can only reach modest accuracies around 10^-5. can indeed gets quite good accuracies. Nevertheless, still reaches high accuracies with a small number of sampling steps. Similar results are observed with multiple runs. § CONCLUSIONS This work proposes a set of techniques that can make the Nyström method reach high accuracies in practice for kernel matrix low-rank approximations. The usual Nyström method is combined with strong rank-revealing factorizations to serve as a pivoting strategy. The low-rank basis matrices are refined through alternating direction row and column pivoting. This is incorporated into a progressive sampling scheme until a desired accuracy or numerical rank is reached. A fast subset update strategy further leads to improved efficiency and also convenient randomized accuracy control. The design of the resulting HAN schemes is based on some strong heuristics, as supported by some relevant accuracy and singular value analysis. Extensive numerical tests show that the schemes can quickly reach high accuracies, sometimes with quality close to SVDs. The schemes are useful for low-rank approximations related to kernel matrices in many numerical computations. They can also be used in rank-structured methods to accelerate various data analysis tasks. The design of the schemes is fully algebraic and does not require particular information from the kernel or the data sets. It remains open to give statistical or deterministic analysis of the decay of the approximation error in the progressive sampling and refinement steps. We are also attempting a probabilistic study of some steps in the HAN schemes that may be viewed as a randomized rank-revealing factorization. 99 avr11H. Avron and S. Toledo, Randomized algorithms for estimating the trace of an implicit symmetric positive semi-definite matrix, J. ACM 58 (2011), Article 8. bai03Z.-Z. Bai and M. K. Ng, Preconditioners for nonsymmetric block toeplitz-like-plus-diagonal linear systems, Numer. Math., 96 (2003), pp. 197–220. bai20Z.-Z. Bai and K.-Y. Lu, On regularized Hermitian splitting iteration methods for solving discretized almost-isotropic spatial fractional diffusion equations, 27, (2020), e2274. beb00M. 
Bebendorf, Approximation of boundary element matrices, Numer. Math., 86 (2000), pp. 565–589. cai22D. Cai, J. Nagy, and Y. Xi, Fast deterministic approximation of symmetric indefinite kernel matrices with high dimensional datasets, SIAM J. Matrix Anal. Appl., 43 (2022), pp. 1003–1028. cha87T. F. Chan, Rank revealing QR factorizations, Linear Algebra Appl., 88/89 (1987), pp. 67–82. toepS. Chandrasekaran, M. Gu, X. Sun, J. Xia, and J. Zhu, A superfast algorithm for Toeplitz systems of linear equations, SIAM J. Matrix Anal. Appl., 29 (2007), pp. 1247–1266. des06A. Deshpande, L. Rademacher, S. Vempala, and G. Wang, Matrix approximation and projective clustering via volume sampling, Theory Comput., 2 (2006), pp. 225–247. dri06P. Drineas, R. Kannan, and M. W. Mahoney, Fast Monte Carlo algorithms for matrices I: Approximating matrix multiplication, SIAM Journal on Computing 36 (2006), pp. 132–157. dri05P. Drineas and M. W. Mahoney, On the Nyström method for approximating a Gram matrix for improved kernel-based learning, J. Machine Learning, 6 (2005), pp. 2153–2175. dri12P. Drineas, M. W. Mahoney, and D. P. Woodruff, Fast approximation of matrix coherence and statistical leverage, J. Machine Learning, 13 (2012) pp. 3441–3472. meshpartJ. R. Gilbert and S.-H. Teng, MESHPART, A Matlab Mesh Partitioning and Graph Separator Toolbox, http://aton.cerfacs.fr/algor/Softs/MESHPART/. git16A. Gittens and M. W. Mahoney, Revisiting the Nyström method for improved large-scale machine learning, J. Machine Learning, 16 (2016), pp. 1–65. gor01S. A. Goreinov and E. E. Tyrtyshnikov, The maximal-volume concept in approximation by low-rank matrices, in Contemporary Mathematics, vol 280, 2001, pp. 47–52. gre87L. Greengard and V. Rokhlin, A fast algorithm for particle simulations, J. Comput. Phys., 73 (1987), pp. 325–348. gu95M. Gu and S. C. Eisenstat, A divide-and-conquer algorithm for the symmetric tridiagonal eigenproblem, SIAM J. Matrix Anal. Appl., 16 (1995), pp. 79–92. srrqrM. Gu and S. C. Eisenstat, Efficient algorithms for computing a strong rank-revealing QR factorization, SIAM J. Sci. Comput., 17 (1996), pp. 848–869. hac99W. Hackbusch, A sparse matrix arithmetic based on ℋ-matrices, Computing, 62 (1999), pp. 89–108. hal11N. Halko, P.G. Martinsson, and J. Tropp, Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions, SIAM Review, 53 (2011), pp. 217–288. hor91R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, 1991. ips13I. C. F. Ipsen and T. Wentworth, Sensitivity of leverage scores and coherence for randomized matrix algorithms, Extended abstract, Workshop on Advances in Matrix Functions and Matrix Equations, Manchester, UK, 2013. kum02S. Kumar, M. Mohri, and A. Talwalkar, Sampling methods for the Nyström method, J. Machine Learning, 13 (2012), pp. 981–1006. lua20Q. Luan and V. Y. Pan, CUR LRA at sublinear cost based on volume maximization, In LNCS 11989, Book: Mathematical Aspects of Computer and Information Sciences (MACIS 2019), D. Salmanig et al (Eds.), Springer Nature Switzerland AG 2020, Chapter No: 10, pages 1–17, Springer Nature Switzerland AG 2020 Chapter. mac16T. Mach, L. Reichel, M. Van Barel, and R. Vandebril, Adaptive cross approximation for ill-posed problems, J. Comput. Appl. Math., 303 (2016), pp. 206–217. mar17P. G. Martinsson, G. Quintana-Orti, N. Heavner, and R. van de Geijn, Householder QR factorization with randomization for column pivoting (HQRRP), SIAM J. Sci. Comput., 39 (2017), pp. 
C96–C115. mir03L. Miranian and M Gu, Strong rank revealing LU factorizations, Linear Alg. Appl., 367 (2003), pp. 1–16. nag97J. Nagy, P. Pauca, R. Plemmons, and T. Torgersen, Space-varying restoration of optical images, J. Opt. Soc. Amer. A, 14 (1997), pp. 3162–3174. superdcX. Ou and J. Xia, SuperDC: Superfast divide-and-conquer eigenvalue decomposition with improved stability for rank-structured matrices, SIAM J. Sci. Comput., 44 (2022), pp. A3041–A3066. pan19V. Y. Pan, Q. Luan, J. Svadlenka, and L. Zhao, CUR low rank approximation of a matrix at sublinear cost, arXiv:1906.04112. distmeshP.-O. Persson, DistMesh - A Simple Mesh Generator in MATLAB, http://persson.berkeley.edu/distmesh. spectral1dJ. Shen, Y. Wang, and J. Xia, Fast structured direct spectral methods for differential equations with variable coefficients, I. The one-dimensional case, SIAM J. Sci. Comput., 38 (2016), pp. A28–A54. tal10A. Talwalkar and A. Rostamizadeh, Matrix coherence and the Nyström method, UAI'10: Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence, (2010), pp. 572–579. tro17J. A. Tropp, A. Yurtsever, M. Udell, and V. Cevher, Practical sketching algorithms for low-rank matrix approximation, SIAM J. Matrix Anal. Appl., 38 (2017), pp. 1454–1485. hsseigJ. Vogel, J. Xia, S. Cauley, and V. Balakrishnan, Superfast divide-and-conquer method and perturbation analysis for structured eigenvalue solutions, SIAM J. Sci. Comput., 38 (2016), pp. A1358–A1382. wil01C. Williams and M. Seeger, Using the Nyström method to speed up kernel machines, Advances in Neural Information Processing Systems 13, (2001), pp. 682–688. mfhssrsJ. Xia, Randomized sparse direct solvers, SIAM J. Matrix Anal. Appl., 34 (2013), pp. 197–227. mhsJ. Xia, Multi-layer hierarchical structures, CSIAM Trans. Appl. Math., 2 (2021), pp. 263–296. fasthssJ. Xia, S. Chandrasekaran, M. Gu, and X. S. Li, Fast algorithms for hierarchically semiseparable matrices, Numer. Linear Algebra Appl., 17 (2010), pp.  953–976. circJ. Xia and M. Lepilov, Why are many circulant matrices rank structured? Preprint. xia17J. Xiao, M. Gu, and J. Langou, Fast parallel randomized QR with column pivoting algorithms for reliable low-rank matrix approximations, 24th IEEE International Conference on High Performance Computing, Data, and Analytics (HIPC), Jaipur, India, 2017. kercomprX. Ye, J. Xia, and L. Ying, Analytical low-rank compression via proxy point selection, SIAM J. Matrix Anal. Appl., 41 (2020), pp. 1059–1085. zha08K. Zhang, I. W. Tsang, and J. T. Kwok, Improved Nyström low-rank approximation and error analysis, Proceedings of the 25th international conference on Machine learning, (2008), pp. 1232–1239.
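To make the preceding Nyström discussion concrete, the following minimal Python sketch illustrates the plain column-sampling Nyström approximation that underlies the schemes tested above; the HAN schemes additionally use rank-revealing pivoting, alternating row/column refinement, and progressive sampling, none of which is reproduced here. The kernel choice, the uniform random sampling rule, and all function names are our own illustrative assumptions rather than the paper's implementation, and such plain sampling typically stalls at modest accuracies, which is exactly the gap the refined schemes aim to close.

import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian (RBF) kernel matrix between row-wise data sets X and Y."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def nystrom_approx(K, idx):
    """Basic Nystrom approximation K ~ C * pinv(W) * C^T from a column subset idx."""
    C = K[:, idx]                  # n x k sampled columns
    W = K[np.ix_(idx, idx)]        # k x k intersection block
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 8))       # toy stand-in for a data set in 8 dimensions
X = (X - X.mean(0)) / X.std(0)          # standardized to mean 0 and variance 1, as above
K = rbf_kernel(X, X)

for k in (10, 20, 40, 80):              # progressively larger sample sizes
    idx = rng.choice(K.shape[0], size=k, replace=False)
    err = np.linalg.norm(K - nystrom_approx(K, idx)) / np.linalg.norm(K)
    print(f"k = {k:3d}, relative Frobenius error = {err:.2e}")

The demo forms the full kernel matrix and the full product only to measure the error; in practice one works with the factors C and pinv(W) directly.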
http://arxiv.org/abs/2307.04331v1
20230710035458
On the Jets Induced by a Cavitation Bubble Near a Cylinder
[ "Yuxin Gou", "Junrong Zhang", "Akihito Kiyama", "Zhao Pan" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
On the Jets Induced by a Cavitation Bubble Near a Cylinder Yuxin Gou, Junrong Zhang, Akihito Kiyama, Zhao Pan ==================================================== The dynamics of cavitation bubbles in the vicinity of a solid cylinder or fibre are seen in water treatment, demolition and/or cleaning of composite materials, as well as bio-medical scenarios such as ultrasound-induced bubbles near the tubular structures in the body. When the bubble collapses near the surface, violent fluid jets may be generated. Understanding whether these jets occur and predicting their directions (departing or approaching the solid surface) is crucial for assessing their potential impact on the solid phase. However, the criteria for classifying the onset and directions of the jets created by cavitation near a curved surface of a cylinder have not been established. In this research, we present models to predict the occurrence and directions of the jet in such scenarios. The onset criteria and the direction(s) of the jets are dictated by the bubble stand-off distance and the cylinder diameter. Our models are validated by comprehensive experiments. The results not only predict the jetting behaviour but can serve as guidelines for designing and controlling the jets when a cavitation bubble collapses near a cylinder, whether for protective or destructive purposes. § INTRODUCTION Cavitation is a phase transition process from liquid to gas, which is often observed when the pressure of the liquid experiences a significant drop within a short time. The collapse and rebound of the bubble may generate shock waves, extreme heating, and high-velocity jets, resulting in damage to the solid boundaries nearby. This process is detrimental in many scenarios, such as cavitation erosion of hydraulic machinery and destruction of human tissues (e.g., bone or brain, <cit.>). On the other hand, some applications such as biomedical ultrasound and ultrasonic cavitation cleaning <cit.> take advantage of the force acting on the boundary. Hence, the cavitation dynamics near the boundaries have been of interest to the community. Studies on bubble dynamics near a wall and associated damaging mechanisms can be traced back to the 1940s <cit.>, focusing on the cavitation phenomena near a flat surface (see, for example, <cit.>, and an illustration in figure <ref>(a)). When a bubble collapses near a flat solid wall, the bubble may migrate to the wall, and a directional liquid jet towards the wall is created. The concentrated momentum impacts a small area on the wall, where the induced pressure and shear are considered to be among the primary mechanisms for cleaning and/or damaging the surfaces <cit.>. Therefore, the onset of the directional jet is the key factor determining the interaction between the bubble and the boundary. The direction of the jet depends on a multitude of factors, especially the geometry of the boundaries. <cit.> experimentally studied the direction of the jet generated upon the rebound of a bubble in a corner of two solid boundaries, where the angle between them was set to 90^∘ or less (figure <ref>(b & c)). <cit.> proposed a generalized formula that predicts the jet direction in a corner with an arbitrary opening angle α and proximity to the walls (figure <ref>(d)). They show that there exist analytic solutions that predict the jet direction for α = π/n, where n is a natural number. Several studies reported that the fluid jet formed upon the bubble collapse near a solid wall with complex geometry does not always point to the wall.
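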
<cit.> reported the dynamics of the bubbles near trapezoidal ridges and valleys (figure. <ref>(e)) and found that the fluid jet can appear in two different directions (i.e., a departing or approaching jet to the wall). The departing jet may appear when a bubble collapses near the ridge, while a bubble near the valley can only form an approaching jet in their experiments. The configuration might share some similarity to the bubble dynamic near a curved surface (e.g., the surface of a cylinder or a sphere, see figure <ref>(f & g)). The morphology of the bubble in the neighbourhood of a curved surface has been studied <cit.>, and the curvature of the solid wall was found to be one of the primary parameters in addition to the stand-off distance <cit.>. A departing jet may appear when the bubble collapses near a convex (positive curvature) surface. However, extensive data or detailed discussions on the direction of the dual fluid jets were not reported. An interesting feature of the bubble near a convex surface is the “mushroom” bubble before collapsing, which is almost always associated with the departing jet. This observation has been reported in earlier studies (e.g., <cit.>) and recent research on cavitation near the tip of a thin cylinder also concurred with similar evidence. <cit.> reported that the mushroom-shaped collapsing bubble could happen when a cavitation bubble was initiated near the tip of a thin cylinder (figure. <ref>(h)). The fluid-gas interface resembling the `stem of the mushroom' (i.e., the interfaces close to the tip of the cylinder) contracts faster than the `mushroom cap', which results in a departing jet when the bubble fully collapses. <cit.> also suggested that an optimal length scale of cylinder thickness exists, compared to a fixed bubble diameter, so that the jet becomes the most powerful. <cit.> numerically approached this problem and revealed that the mushroom-shaped bubble near the tip of the cylinder might be linked to the reduction of the impact load on the surface. It is perhaps because the not-yet-formed departing jet carries momentum away from the solid surface. Beyond the distinct physics, this setup of bubbles near the tip of a thin cylinder can generate a high-speed departing jet (up to O(1000) m/s according to the simulations by <cit.>) and is of interest to applied research. However, the direction of the jets and the criteria of the departing jet onset were not analyzed. In the current work, we are interested in the dynamics of bubbles and jets next to the side surface of cylinders. To the best of our knowledge, this scenario has not been reported except for <cit.> studying the micro-bubbles near a fiber, as well as <cit.> where bubble behaviour near a thick cylinder (inspired by cavitation near the hull of a ship) was investigated. There are no detailed discussions on the direction of the jet(s) when the bubble collapses near a cylinder available in the current literature. In this paper, we report a regime diagram, validated by vast experimental data, that classifies the onset and the direction of the jet(s), which is dictated by two non-dimensional parameters (i.e., bubble stand-off distance and the cylinder thickness relative to the bubble diameter). Particularly, we find that when a large bubble is close to a thin cylinder, a departing jet is likely to form after collapsing and the cylinder is protected. This discovery might be insightful for some applied scenarios. 
For example, fibrous or tubular structures in the vicinity of a cavitation bubble could be free from severe damage and it is possible to design patterned surface <cit.> or fibrous structure to reduce cavitation erosion. § EXPERIMENTAL SETUP The experimental setup is shown in figure <ref>(a). The cavitation bubbles were generated by shorting adjustable direct current voltage carried by two thin wires of 0.14 mm in diameter. The sizes of the bubbles varied from 5.45 to 24.58 mm in diameter by adjusting the voltage (within the range of 60 – 120 V). The cylinders used in the experiments are made from stainless steel with a contact angle of around 60^∘. The wires are at least one order of magnitude thinner compared to the size of the cylinders and the cavitation bubbles and thus the influence of the wires is negligible. The wires and the cylinder were placed in the middle of a tank (20 × 20 × 20 cm^3) filled with degassed tap water. The tank is large enough to ensure the bubble behaviour was not affected by either the free surface or the rigid wall. The dynamics of the cavitation bubbles was filmed by a high-speed camera (FASTCAM SA-Z or NOVA S20, Photron, Tokyo, Japan) at 60,000 frames per second. A schematic of the bubble and the cylinder overlaid on a high-speed image is shown in figure <ref>(b). Two key non-dimensional parameters—the standoff distance γ and the non-dimensional cylinder diameter η—are defined as γ =d_s/D_0  and  η = D/D_0, respectively, where d_s is the distance from the spark location, which can be considered as the nominal center of the bubble, to the closest cylinder surface, D_0 is the maximum bubble diameter (marked by a blue circle), and D is the cylinder diameter (marked by a red circle). The distance between the nominal center of the bubble and the center line of the cylinder is written as d = d_s + D/2, which can be normalized by D_0 as ζ = d/D_0 = γ +η/2. This is an alternative non-dimensional length scale characterizing the distance between the bubble and the cylinder. § RESULTS We carried out comprehensive experiments on spark-induced cavitation bubbles in the vicinity of a cylinder by varying η and γ. The experiments revealed five distinct bubble behaviours for various conditions (demonstrated in figure <ref>). The dimensional and non-dimensional parameters of these typical cases are listed in Table <ref>. When the bubble is initiated far enough from the surface of a cylinder, it is expected that the bubble remains spherical when expanding and collapsing, and no jets are formed after the bubble collapses. We refer to this observation as a “no jet (NJ)” case hereafter. For example, in figure <ref>(a), a bubble is initiated by a spark (indicated by the apex of the green triangle at t=0 ms) at γ=1.44 from a cylinder (marked by the scarlet circle). The bubble grows and reaches its maximum diameter D_0 at t = 0.46 ms, collapses at t = 0.87 ms for the first time, and rebounds to the maximum of the cloud at t=1.03 ms. The direct observation of the jets (onset and directions) during collapse can be difficult, thus we use the displacement (δ_D) from the bubble onset location (marked by the green triangle in figure <ref>) to the centroid of the maximum bubble cloud of the second expansion (marked by the yellow triangle) as an indicator of the net momentum due to the bubble collapse. The positive direction of δ_D points from the centerline of the cylinder to the center of the bubble). A non-zero δ_D infers a liquid jet generated when the bubble collapses. 
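The classification that follows hinges on only a handful of numbers per experiment, so a small helper that performs the non-dimensionalization of the setup (γ = d_s/D_0, η = D/D_0, ζ = γ + η/2) and turns the rebound displacement δ = δ_D/D_0 into a coarse jet-direction label can make the bookkeeping explicit. The sketch below is purely illustrative: the function names and the example inputs are ours, the threshold δ_0 = 0.03 is the measurement threshold used in this work, and the transitional DJE cases cannot be identified from δ alone.

def nondimensional_parameters(d_s, D0, D):
    """Map the measured stand-off d_s, maximum bubble diameter D0 and cylinder
    diameter D to (gamma, eta, zeta) with zeta = gamma + eta/2."""
    gamma = d_s / D0
    eta = D / D0
    zeta = gamma + eta / 2.0
    return gamma, eta, zeta

def label_from_displacement(delta, delta0=0.03):
    """Net jetting direction from delta = delta_D / D0 (positive = away from cylinder).
    DJE cases scatter on both sides of zero and are identified from the images instead."""
    if delta < -delta0:
        return "approaching jet dominates (AJO-like)"
    if delta > delta0:
        return "departing jet dominates (DJD-like)"
    return "no net displacement (NJ-like)"

# Illustrative values only (not taken from the paper's table):
print(nondimensional_parameters(d_s=2.0e-3, D0=8.0e-3, D=1.0e-3))
for d in (-0.12, 0.00, 0.04):
    print(f"delta = {d:+.2f}: {label_from_displacement(d)}")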
The non-dimensional displacement, δ = δ_D/D_0, in figure <ref>(a) was δ = 0.00 (note that NJ is classified for |δ|<δ_0, δ_0 = 0.03 is a small value as the measurement threshold in this work.) As the center of the bubble moves closer to the cylinder, a jet shooting toward the cylinder is generated when the bubble collapses and we address this case as “approaching jet only (AJO)". As shown in figure <ref>(b) as an example, the bottom of the bubble is deformed when approaching the cylinder from a standoff distance of γ = 0.45 (e.g., see two frames at t = 0.40 and 0.96 ms). The centroid of the rebound bubble (marked by the yellow triangle at t = 2.24 ms) moves towards the cylinder (δ=-0.12 in this case), compared to the spark location (marked by the green triangle at t = 0 ms). This footprint indicates a liquid jet approaching the cylinder is generated during the bubble rebound. In addition, no other jet(s) were observed. The bubble cloud formed during the second expansion cycle collapses and largely covers the cylinder (t = 2.72 ms), implying that the approaching jet may carry a large momentum. This process that generates an approaching jet is similar to a bubble collapsing near a flat rigid surface. Figure <ref>(c) presents a typical case where the mushroom bubble forms and a departing jet starts to appear. In this work, we refer to this scenario as “departing jet emerging (DJE)”. The stand-off distance γ = 0.26 and the non-dimensional cylinder size η=0.09 in this case were smaller than those of the case in figure <ref>(b). In figure <ref>(c), when the bubble reaches its maximum volume (at t = 1.09 ms), the bubble partially warps the narrow cylinder and maintains its spherical shape in general. The stem of the “mushroom" is formed due to the fast-retracting liquid jets pinching the bubble near the cylinder (indicated by the orange arrowheads at t = 2.02 ms). While collapsing, the cap of the mushroom remains spherical as the gas-liquid interface (indicated by the purple arrowhead at t =2.02 ms) is far away from the cylinder and recedes slower compared to the pinching jets. The dynamics are similar to the observations made by <cit.>. It is noteworthy that the bubble cloud in the second expansion cycle moves in two directions. The centroid of the rebound bubble moves toward the cylinder (δ= -0.05, comparing the location of the green and yellow triangles at t =  0 and 2.58 ms, respectively), similar to the case in figure <ref>(b), while there is a minor cloud bubble shooting away from the cylinder (see t = 2.58 ms, marked by the short pink arrowhead in figure <ref>(c)). This observation indicates that two jets exist after the collapse: one jet is approaching and the other one is departing from the cylinder. The departing jet, which is an emerging feature compared to the case in figure <ref>(b), however, does not yet dominate the entire jetting process. When the bubble is close to a relatively thin cylinder, the departing jet may dominate over the approaching jet and we denote this scenario as “departing jet dominant (DJD)”. A typical case is shown in figure <ref>(d) for γ = 0.06 and η = 0.09. The bubble completely wraps the cylinder when it expands to the maximum diameter (t= 1.15 ms) and then collapses. Similar to the case shown in figure <ref>(c), the elongated rebound bubble cloud covering the cylinder meanwhile moving away from the (t=2.53 ms) indicates the existence of both approaching and departing jets. 
Noting that the centroid of the bubble cloud (t= 2.53 ms, marked by the yellow triangle) is further away from the cylinder than the center of bubble onset (green triangle at t= 0 ms) and that the corresponding displacement is δ = +0.04, we argue that the jet forming at collapse is mainly departing. Figure <ref>(e) shows another “no-jet (NJ)” case. A bubble is initiated right next to a thin cylinder, where the size of the bubble is much larger than that of the cylinder (η = 0.05). The bubble behaviour in this case is similar to a free bubble. The centroid of the bubble (cloud) does not show any apparent movement upon rebound, indicating that no jet was generated. Despite the NJ outcome that is similar to the case shown in figure <ref>(a), we emphasize that the phenomenon shown in figure <ref>(e) is due to the vanishing cylinder diameter (η→ 0) whereas the NJ case in figure <ref>(a) is associated with the standoff distance in the limit of γ→∞. § MECHANISMS The observations in figure <ref> imply that when a bubble collapses near a cylinder, depending on the relative position as well as the size of the bubble and the cylinder (γ and η), the cylinder may affect the liquid flow in two ways (i.e., blocking and focusing). First, the cylinder can block the liquid behind it from directly moving to the center of the bubble, while the liquid on the other side of the bubble is free to move to fill the cavity during collapsing. This causes a pressure gradient and, in turn, the collapsing bubble generates a jet approaching the cylinder <cit.>. This often happens when the cylinder is relatively large and/or the bubble is not too close to the cylinder (e.g., see the case in figure <ref>(b)). This mechanism is similar to the well-known jet formation from a bubble collapsing next to a solid flat surface. Second, when the cylinder is relatively small and the bubble is initiated close enough to the cylinder, the bubble can be significantly deformed during its growth. In figure <ref>(c), for example, the bubble partially wraps the cylinder while achieving its maximum volume (at t = 1.09 ms), leaving two regions of the gas-liquid interface having a higher curvature than other parts of the bubble. The higher curvature corresponds to a smaller equivalent local bubble radius, which is associated with a shorter time for a local collapse. This mechanism has also been argued by <cit.> based on Rayleigh's collapse time, T ≃ 0.915D̃_0√(ρ /p_∞), where T is the collapse time, ρ is the liquid density, p_∞ is the ambient pressure, and D̃_0 is the equivalent bubble size reflecting the local curvature of the bubble. Over the initial stage of the collapse, the advantage of the high-speed flows driven by the high-curvature interface accumulates, which results in two jets pinching the bubble (see the orange arrowheads in figure <ref>(c) for instance). The two pinching jets form the stem of the mushroom-shaped bubble before collapsing. After pinch-off, the two pinching jets merge and the momentum is focused upward, pointing away from the cylinder, which can dominate the retracting liquid near the cap of the mushroom-shaped bubble (see the purple arrowhead in figure <ref>(c)). This focusing mechanism is similar to the shaped charge effect. The competition between these two mechanisms dictates the onset and direction(s) of the jet(s), and some typical results are shown in figure <ref>.
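Since the argument above leans on Rayleigh's collapse time, a one-line evaluation helps to fix orders of magnitude. The sketch below follows the expression exactly as written in the text (with the equivalent bubble size D̃_0 as the length scale); the water properties and the example sizes are our own assumptions, and the classical formula is often quoted with the bubble radius and the driving pressure difference instead.

import math

def rayleigh_collapse_time(D_eq, rho=998.0, p_inf=101325.0):
    """Rayleigh collapse time as written in the text, T ~ 0.915 * D_eq * sqrt(rho / p_inf).
    D_eq is the equivalent bubble size in metres, rho in kg/m^3, p_inf in Pa."""
    return 0.915 * D_eq * math.sqrt(rho / p_inf)

# A smaller equivalent size (higher local curvature) collapses sooner,
# which is the origin of the pinching jets described above.
for D_eq in (2e-3, 5e-3, 10e-3):
    print(f"D_eq = {D_eq*1e3:4.1f} mm -> T = {rayleigh_collapse_time(D_eq)*1e3:.2f} ms")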
§ REGIME DIAGRAMS AND VALIDATION Based on the above experimental observations and analysis on the mechanisms, we hypothesize that the direction(s) of the jet(s) caused by the bubble collapsing near a cylinder are dictated by two parameters. One is the standoff distance γ = d_s/D_0 measuring the distance from the bubble to the cylinder, and the other is the non-dimensional cylinder diameter η = D/D_0. Several critical states regarding γ and η are proposed below and illustrated in figure <ref>. When a bubble wraps about half of the cylinder, the virtual circle enclosing the bubble passes the center of the cylinder (see figure <ref>(a)). We conjecture that this is a state separating the blocking and focusing mechanisms and determines if a departing jet would emerge. The corresponding geometric relationship for the circles representing the bubble and cylinder is d_s=1/2(D_0-D), and the non-dimensional form is γ = 1/2 - 1/2η. If the standoff distance is smaller than this threshold, that is to say γ < 1/2 - 1/2η, high curvature on the sufficiently deformed bubble leads to the evident focusing effect and a departing jet is expected. When the bubble is even closer to the cylinder, especially when the bubble is relatively large, the focusing effect is more pronounced than the blocking and the departing jet starts to dominate. This condition translates to d < κ_1 D_0, where κ_1 is a coefficient that can be determined by experimental data (see figure <ref>(b) for illustration). Invoking d = d_s+1/2D, the non-dimensional form of this criterion is γ < κ_1 - 1/2η. When the bubble is far enough from a sufficiently small cylinder, d_s > 1/2D_0 + κ_2 D, where κ_2 is another constant to be determined (see figure <ref>(c)), the effect of the cylinder (blocking or focusing) is negligible and thus no jet is expected. The corresponding non-dimensional form is γ > 1/2 + κ_2 η. This criterion considers the combined effects of the relative size and position of a bubble and cylinder. The asymptotic behaviours (i.e., small η→ 0 and large γ→∞) of such a setup are also of interest. When the cylinder is significantly smaller than the bubble (see figure <ref>(d) for illustration), for example, D < κ_3 D_0 ≪ D_0 with corresponding non-dimensional form η < κ_3 ≪ 1, the relative placement of the bubble and cylinder is not important anymore. Jets are not expected when the bubble collapses due to the diminishing impact of the cylinder of a small length scale. κ_3 ≪ 1 is a small constant that can be found by experiments. When the bubble is too far away from the cylinder (see figure <ref>(e)), the size of the cylinder does not matter. We expect there exists a critical value κ_4 so that if d_s > κ_4 D_0 ≫ D_0/2, no jet would be generated when the bubble collapse. The non-dimensional form of this criterion is γ > κ_4 ≫1/2. Recall (<ref>) again, the above criteria can also be expressed using ζ instead of γ. We use γ to be consistent with the current literature, however, ζ is practical to investigate some of the critical states regarding the directions of the jets. The directions of the jets after bubble collapsing can be qualitatively observed by the direction of the moving bubble cloud in the high-speed videos. For example, when a departing jet appears, the bubble cloud tends to move away from the cylinder over the collapsing-rebound cycles. This can be quantitatively identified using the value of δ = δ_D/D_0 as a measure, which is a characteristic displacement of the bubble cloud. 
If there is only an approaching jet appears after the first collapse, the momentum of the jet would carry the bubble cloud towards the cylinder (e.g., see figure <ref>(b)) and we expect δ < -δ_0<0. Similarly, when the departing jet dominates the approaching one, δ > +δ_0>0 (see figure <ref>(d) for instance). However, if the departing and approaching jets cannot dominate one to the other, the direction of δ_D and the `sign' of δ are not necessarily determined. We present δ as a function of ζ in figure <ref> to show our argument above is valid. Viewing the δ–ζ phase diagram vertically, we can see that all the AJO cases (orange upside-down triangle in figure <ref>) are located in the region of δ<-δ_0, whereas DJD cases (pink upright triangles) are in δ>+δ_0. NJ cases (black crosses) are distributed along δ = 0 (-δ_0 < δ < +δ_0 to be more specific) whereas the DJE cases (blue diamond symbols) are scattered on both sides of δ = 0. Interrogating the experimental data on the δ – ζ phase diagram ( figure <ref>) horizontally is useful for verifying the aforementioned models and identifying the coefficients such as κ_1. It is visible that the jet direction evolves from approaching to departing as ζ decreases. In the region of ζ > 0.5 (yellow-shaded, to the right of the blue chain line in figure <ref>), almost only AJO cases exist. Recalling (<ref>), ζ= 0.5 is an alternative expression of γ=1/2-1/2η, thus, (<ref>) is validated. The departing jets emerge when ζ < 0.5, and further reducing ζ, the departing jet eventually becomes dominant for ζ < 0.25, which is equivalent to (<ref>) for κ_1 = 1/4. This is supported by observing that in the red-shaded region to the left of the magenta line (corresponding to ζ = 0.25), almost only DJD cases exist. The DJE cases (blue diamond symbols) are located in the transient region for 0.25<ζ < 0.5. The black symbols represent the data extracted from <cit.>, where a laser-induced micro-bubble collapsing near a micro-fibre was studied. This work did not focus on the direction of jets, and the bubble dynamics after the first collapse was not reported. Instead, the location of the bubble near collapsing was recorded. Comparing the displacement from the location of the bubble onset to the center of the bubble at the first collapse, one could still infer the directions of jets. Despite being a different measure of δ than we used for our data, this qualitative classification is sufficient to tell the AJO, DJE, and DJD cases apart in <cit.>, and we see that the experimental data by <cit.> agree with our model. To validate equations (<ref>) and (<ref>), we plot the non-dimensionalized experimental data on the γ – η plane (figure <ref>). The blue chain line indicates equation (<ref>) separating the AJO and the DJE cases. The magenta line in figure <ref> is based on equation (<ref>) that separates most DJD cases from the DJE cases. Experimental data on the γ – η plane also provides quantitative insights into the NJ cases due to different reasons. κ_2=0.5 for (<ref>) separates the NJ cases and the AJO cases for 5×10^-2≲η≲ 7 (see the orange dotted line in figure <ref>). For η < 5×10^-2, a sufficiently thin cylinder cannot affect the dynamics of the bubble and almost no jets were observed in our experiments. Thus, κ_3 = 5×10^-2 in (<ref>) allows our model to establish the criterion of a thin cylinder. For the other extreme, κ_4 = 4 for equation (<ref>) was suggested by our experiments, which is the criterion for a large stand-off distance. 
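The boundaries collected above can be folded into a single classification rule. The Python sketch below is only an illustration of the regime-diagram logic, using κ_1 = 1/4, κ_2 = 1/2, κ_3 = 5×10^-2 and κ_4 = 4 as suggested by the experiments; the function name and the sample (γ, η) pairs are ours (the η values for the first two pairs are made-up placeholders), the thresholds are empirical, and cases close to a boundary should not be expected to classify sharply.

def jet_regime(gamma, eta, k1=0.25, k2=0.5, k3=0.05, k4=4.0):
    """Predict the jetting regime for a bubble collapsing near a cylinder from the
    stand-off distance gamma = d_s/D0 and the cylinder diameter eta = D/D0."""
    # Regime IV: no jet (thin cylinder, large stand-off, or their combination)
    if eta < k3 or gamma > k4 or gamma > 0.5 + k2 * eta:
        return "IV: no jet (NJ)"
    # Regime III: departing jet dominant (zeta = gamma + eta/2 < 1/4)
    if gamma < k1 - 0.5 * eta:
        return "III: departing jet dominant (DJD)"
    # Regime II: departing jet emerging (1/4 <= zeta < 1/2)
    if gamma < 0.5 - 0.5 * eta:
        return "II: departing jet emerging (DJE)"
    # Regime I: approaching jet only
    return "I: approaching jet only (AJO)"

# Illustrative (gamma, eta) pairs, roughly in the spirit of the typical cases above:
for g, e in ((1.44, 0.5), (0.45, 0.5), (0.26, 0.09), (0.06, 0.09), (0.3, 0.03)):
    print(f"gamma = {g:4.2f}, eta = {e:4.2f} -> {jet_regime(g, e)}")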
We note that κ_4=4 agrees with the established data about a cavitation bubble near a flat surface (<cit.>), which can be considered as a thick cylinder with vanishing curvature (i.e., η→∞). In figure <ref>, criteria based on equations (<ref>) – (<ref>) separate the γ – η phase diagram into four regimes. Regime I (yellow shade) covers most of the AJO cases (orange upside-down triangles). In regime III (pink shade), almost only pink triangles (associated with DJD cases) appear. The transient cases for the directional jet(s) (DJE cases, marked by the blue diamond symbols in blue-shaded regime II) are in between Regimes I and III. Regime IV (different shades of green for three sub-regimes) indicates NJ cases rooted in different mechanisms. In Regime IV-1, NJ happens as a cylinder is too thin (small η). In Regime IV-3, NJ is expected as the bubble is too far away from the solid surface (large γ). Regime IV-2 can be thought of as the transient region between Regime IV-1 and IV-3, where the combined effect of η and γ must be considered and is governed by (<ref>). Again, the data extracted from <cit.> falls in our regime diagram, and provides additional validation based on the interaction of micro-bubbles. § CONCLUDING REMARKS In the current work, we carried out systematic experiments to investigate a cavitation bubble collapsing near a cylinder. We find that the onset and the direction of the jet(s) are dictated by the relative positioning and the size of the bubble and the cylinder (i.e., the standoff distance γ and the normalized cylinder diameter η). When the cylinder is too thin and/or too far away from the bubble, a bubble does not expel any visible jets. Once the bubble starts interacting with the cylinder—when γ and/or η are small enough—a jet approaching the cylinder occurs, as one might expect, which is similar to that for a bubble collapsing in the vicinity of a flat wall. When the cavitation bubble is onset closer to an even smaller cylinder within a particular range, the bubble possesses a mushroom-like collapse followed by a departing jet. Given a certain maximum bubble size, the departing jet carries the energy away from the cylinder, which might result in a reduction of the cavitation-induced damage. In this sense, the cylinder is protected by being thin and staying close to the cavitation. We proposed models to classify these phenomena including transition into four regimes on the γ – η phase diagram, which are validated by experiments. The experimental results and criteria shown in this work may be of interest to applications where cavitation bubbles interact with (thin) cylinders and fibres. For example, a direct implication based on our result is that the demolition of thin fibres and fibrous materials could be challenging, and small bubbles are more effective than bigger ones. When a cylinder near a cavitation bubble needs protection, our regime diagram provides a guideline: one may want to manage the standoff distance and bubble size to avoid the jet onset or staying in the departing jet dominant regime. § ACKNOWLEDGMENTS We thank Drs. S. Peterson and M. Worswick for lending us equipment and J. Beginner and J. Imbert-Boyd for manufacturing and technical support.
http://arxiv.org/abs/2307.04483v1
20230710111105
Towards Hypersemitoric Systems
[ "Tobias Våge Henriksen", "Sonja Hohloch", "Nikolay N. Martynchuk" ]
math.SG
[ "math.SG", "37J35 53D20 70H06" ]
Towards Hypersemitoric Systems Tobias Våge Henriksen, Sonja Hohloch, Nikolay N. Martynchuk ==================================================== This survey gives a short and comprehensive introduction to a class of finite-dimensional integrable systems known as hypersemitoric systems, recently introduced by Hohloch and Palmer in connection with the solution of the problem of how to extend Hamiltonian circle actions on symplectic 4-manifolds to integrable systems with `nice' singularities. The quadratic spherical pendulum, the Euler and Lagrange tops (for generic values of the Casimirs), the coupled angular momenta, and the coupled spin oscillator system are all examples of hypersemitoric systems. Hypersemitoric systems are a natural generalization of so-called semitoric systems (introduced by Vũ Ngọc) which in turn generalize toric systems. Speaking in terms of bifurcations, semitoric systems are `toric systems with/after supercritical Hamiltonian-Hopf bifurcations'. Hypersemitoric systems are `semitoric systems with, among others, subcritical Hamiltonian-Hopf bifurcations'. Whereas the symplectic geometry and spectral theory of toric and semitoric systems is by now very well developed, the theory of hypersemitoric systems is still forming its shape. This short survey introduces the reader to this developing theory by presenting the necessary notions and results as well as its connections to other areas of mathematics and mathematical physics. § INTRODUCTION Integrable Hamiltonian systems play an important role in mathematical and physical sciences. For instance, within celestial mechanics, there is the Kepler problem, and, within quantum mechanics, there is the Jaynes-Cummings model, which are both integrable. Integrable systems are very special dynamical systems exhibiting regular (as opposed to chaotic) behaviour in the sense that there exists a maximal number of (independent, see Definition <ref>) integrals of motion, allowing one to at least in principle integrate the equations of motion. The dynamics of a finite-dimensional integrable Hamiltonian system, defined by means of a proper momentum map (see Definition <ref>), is generically constrained to n-dimensional tori, where n is the number of degrees of freedom. These tori turn out to be Lagrangian submanifolds of the underlying symplectic manifold on which the Hamiltonian system is defined, and thus an integrable system can be seen as a singular Lagrangian torus fibration over a certain subset of ℝ^n, see in particular the papers by Mineur <cit.>, Arnol'd <cit.>, Weinstein <cit.> and Duistermaat <cit.>. This motivates one to study integrable systems using techniques from symplectic geometry. The singular fibres of these singular Lagrangian torus fibrations reflect non-trivial geometric or dynamical properties of the underlying integrable system. The most prominent examples are the monodromy around a focus-focus point and bifurcations of Liouville tori, which we will address below.
In the context of symplectic classification of integrable systems it is known how to classify a number of different types of such (`typical') singularities: a saddle singularity (in one degree of freedom) by Dufour, Molino, and Toulet <cit.>, an elliptic singularity (in any dimension) by Eliasson <cit.>, a focus-focus singularity (in dimension 2) by <cit.>, and a parabolic singularity by Bolsinov, Guglielmi, and Kudryavtseva <cit.> and Kudryavtseva and Martynchuk <cit.>. See also the recent breakthrough results concerning symplectic classification in the real-analytic category by Kudryavtseva <cit.> and by Kudryavtseva and Oshemkov <cit.>. In the context of global classification of integrable systems, Pelayo and Vũ Ngọc <cit.> showed that a large class of physically important systems known as semitoric systems are classified by a set of 5 invariants. This is one of the few known explicit results in the global symplectic classification of integrable systems, apart from the classical Delzant's <cit.> construction and the work of Zung <cit.> relating the semi-local (i.e. in a neighbourhood of a singular fibre) and global classification problems. We refer to Sections <ref> and <ref> for more details on semitoric systems. What is currently missing in the literature is a detailed discussion of systems beyond semitoric type: Whereas the topological classification of such systems is a well developed theory going back to Duistermaat and Fomenko and Zieschang (see e.g. Bolsinov and Fomenko <cit.> and the references therein), a more refined (e.g. symplectic) analysis is currently an open problem for in fact the majority of such systems. In particular, what is missing is a detailed analysis of a generalisation of semitoric systems additionally allowing hyperbolic-regular, hyperbolic-elliptic, and parabolic points, known as hypersemitoric systems. The latter class was introduced by Hohloch and Palmer <cit.> in connection with the problem of extending Hamiltonian circle actions on symplectic 4-manifolds to integrable systems, which they solved within this class of systems, see Hohloch and Palmer <cit.> for details. Hypersemitoric systems thus present a challenging platform for the further study by both geometers and analysists and this survey is devised as a quick introduction. Nevertheless, note that the class of hypersemitoric systems does not include all possible singularities that may arise in 4-dimensional integrable systems: the underlying global S^1-action prevents the existence of hyperbolic-hyperbolic singularities; moreover, the definition of hypersemitoric systems excludes most of the `typical' degenerate S^1-invariant singularities, see Kalashnikov's <cit.> list. There exists another class of integrable systems, namely hyperbolic semitoric systems (cf. <cit.>), which, if one considers the union with semitoric systems, contains hypersemitoric system, see Remark <ref>. The hyperbolic semitoric systems do include all `typical' degenerate S^1-invariant singularities in Kalashnikov's <cit.> list. §.§ Organization of the paper The rest of this paper is organized as follows: In Section <ref>, we give the definition of (Liouville) integrability, before defining toric, semitoric, and hypersemitoric systems. Moreover, we explain some important properties of integrable systems and give a short survey over the theory of atoms and molecules. 
In Section <ref>, we discuss semitoric systems in detail, i.e., their symplectic classification in terms of five invariants and how one may obtain a semitoric system from a toric one. Eventually, we recall some important examples. In Section <ref>, we consider hypersemitoric systems: we first discuss flaps and pleats, which occur in the momentum image of hypersemitoric systems. Then we consider how one may obtain hypersemitoric systems from (semi)toric systems before we briefly explain an explicit example. §.§ Acknowledgements The authors are very grateful to Álvaro Pelayo and San Vũ Ngọc for useful comments and suggestions that helped to improve the original version of this work. The first author was fully supported by the Double Doctorate Funding of the Faculty of Science and Engineering of the University of Groningen. Moreover, all authors were partially supported by the FNRS-FWO Excellence of Science (EoS) project `Symplectic Techniques in Differential Geometry' G0H4518N. § DEFINITIONS, CONVENTIONS, AND BACKGROUND In this section, we give an outline of integrability with an emphasis on integrable systems defined on 4-manifolds and admitting a global effective Hamiltonian circle action. Hypersemitoric systems are a certain class of systems of this type. We start by recalling the classical Arnol'd-Liouville-Mineur theorem, and then move from toric to semitoric to hypersemitoric systems. We also show how the theory relates to the general frameworks of monodromy and bifurcations of Liouville tori, i.e., Fomenko-Zieschang theory. §.§ Integrable systems Let (M, ω) be a symplectic manifold of dimension 2n. Since the symplectic form is non-degenerate, for any function f ∈ C^∞(M,), there exists a unique vector field X_f, called the Hamiltonian vector field of f, such that ι_X_fω = - df. The function f is called the Hamiltonian, and ż = X_f(z) is called a Hamiltonian system, sometimes briefly denoted by X_f. For two Hamiltonians f,g ∈ C^∞(M,), the Poisson bracket is defined by {f, g} := ω(X_f, X_g). If {f, g} = 0, then f and g are said to Poisson commute. Note that {f, g} = X_f(g). If f and g Poisson commute, then g is called a (first) integral of X_f. A Hamiltonian system X_H on a 2n-dimensional symplectic manifold (M, ω) is said to be completely integrable (or briefly integrable) if there exist n functionally independent integrals f_1 := H, f_2,…,f_n of X_H, i.e. their gradients are almost everywhere linearly independent on M, the integrals all Poisson commute with each other, and the flows of X_f_1, …, X_f_n are complete. A shorter notation is (M, ω, F=(f_1,…,f_n)) and F is often referred to as the momentum or integral map of the system. A point p∈ M is regular if the rank of DF_p is maximal and singular otherwise. A value of F is regular if all points in the preimage are regular, and singular otherwise. Similarly, one defines what it means for a fibre F^-1(r) of F to be regular, resp., singular and for a leaf of F, i.e. a connected component of a fibre, to be regular, resp. singular. The Arnol'd-Liouville-Mineur theorem <cit.> describes the regular leaves of the foliation generated by the momentum map of a 2n-dimensional integrable system. Each regular leaf is a Lagrangian submanifold, and if the leaf is connected and compact, then it is diffeomorphic to an n-torus T^n. Such a foliation will be called a Lagrangian torus fibration. 
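Since everything that follows rests on the Poisson bracket and on commuting integrals, a short symbolic check can make the definitions concrete. The sympy sketch below uses the canonical bracket in coordinates (x_i, ξ_i) with ω = ∑ dx_i∧ dξ_i (the overall sign convention varies between sources but does not affect whether two functions commute); the uncoupled oscillator pair and the focus-focus pair appearing in the normal forms below are only illustrative examples.

import sympy as sp

x1, x2, xi1, xi2 = sp.symbols('x1 x2 xi1 xi2')
q = (x1, x2)          # "position" coordinates
p = (xi1, xi2)        # conjugate coordinates, omega = sum dx_i ^ dxi_i

def poisson_bracket(f, g):
    """Canonical Poisson bracket; the overall sign convention does not
    affect whether two functions Poisson commute."""
    return sum(sp.diff(f, q[i]) * sp.diff(g, p[i]) - sp.diff(f, p[i]) * sp.diff(g, q[i])
               for i in range(2))

# Two commuting, almost everywhere independent integrals: an uncoupled oscillator pair
H = (x1**2 + xi1**2) / 2
J = (x1**2 + xi1**2) / 2 + (x2**2 + xi2**2) / 2
print(sp.simplify(poisson_bracket(J, H)))    # 0, so {J, H} = 0

# The focus-focus pair from the normal forms below also commutes:
q1 = x1 * xi1 + x2 * xi2
q2 = x1 * xi2 - x2 * xi1
print(sp.simplify(poisson_bracket(q1, q2)))  # 0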
Let r ∈^n be a regular value for the momentum mapping F, and let F^-1(r) be a connected and compact fibre, and hence diffeomorphic to T^n, and let U be a tubular neighbourhood of F^-1(r). The Arnol'd-Liouville-Mineur theorem also tells us that U is diffeomorphic to V × T^n, where V is an open set of ^n. On V × T^n, there exists coordinates I_1, …, I_n, ϕ_1, …, ϕ_n, called action-angle coordinates. Here each I_i for i = 1, …, n is a function of the f_i's, whilst each ϕ_i is a standard angle coordinate on T^n. In action-angle coordinates, the symplectic form becomes ω = ∑ dϕ_i∧ dI_i. Note that, in general, action-angle coordinates only exist locally. Duistermaat <cit.> showed that there can exist obstructions to the global existence of action-angle coordinates in terms of the (Hamiltonian) monodromy and the Chern class on the topological level as well as the Lagrangian class on the symplectic level. For us, monodromy will play an essential role so that we will recall its definition here; for more detail see <cit.>. Let F : M → B be a Lagrangian torus fibration over an n-dimensional manifold B and denote by R ⊆ B the set of the regular values of F. Then there exists a natural covering ⋃_r ∈ R_1(F^-1(r)) → R, where _1(F^-1(r)) is the first homology group of F^-1(r) with integer coefficients. Because of this, there is a natural representation of π_1(R) into the group SL(n, ) of automorphisms of the lattice _1(F^-1(r)) ≃ℤ^n. This representation is called the Hamiltonian monodromy of F : M → B (or of F : M → R). Thus, to any loop γ in R, one can assign an n× n integer matrix called the monodromy or the monodromy matrix along γ. Note that Lagrangian torus fibrations are allowed to have singular points and these are precisely the points that encode essential properties of the underlying integrable system. One has in particular been interested in non-degenerate singular points, i.e. points for which the Hessians of the integrals span a Cartan subalgebra in the real symplectic Lie algebra sp(2n, ) (cf. Bolsinov and Fomenko <cit.>). Locally one can describe such singularities by local normal forms (cf., among other, the works by Eliasson <cit.>, Miranda and Zung <cit.>, and and Wacheux <cit.>): in a neighbourhood U of a non-degenerate singular point, one can find local symplectic coordinates (x_1, …, x_n, ξ_1, …, ξ_n) such that the symplectic form takes the form ω = ∑_i=1^n dx_i∧ dξ_i in U, and n functionally independent smooth integrals q_1, …, q_n : U → Poisson commuting with all f_1, …, f_n such that q_i is one of the following possible components: * regular component: q_i = x_i, * elliptic component: q_i = 1/2(x_i^2 + ξ_i^2), * hyperbolic component: q_i = x_iξ_i, * focus-focus components (exist in pairs): q_i = x_iξ_i + x_i+1ξ_i+1 and q_i+1 = x_iξ_i+1 - x_i+1ξ_i. We will eventually focus on 4-dimensional integrable systems. In that case, the following six different types of non-degenerate singular points can occur: * rank 0: elliptic-elliptic, hyberbolic-hyperbolic, elliptic-hyperbolic and focus-focus, * rank 1: elliptic-regular and hyperbolic-regular. 
Williamson <cit.> (see also Bolsinov and Fomenko <cit.>) showed that to determine the type of a non-degenerate rank 0 singular point of a 4-dimensional integrable system (M, ω, F=(f_1, f_2)), it is sufficient to find the eigenvalues for the Hessian of the linear combination c_1 f_1 + c_2 f_2 for generic c_1, c_2 ∈ at this singular point since * elliptic components have pairs of purely imaginary eigenvalues, * hyperbolic components have pairs of purely real eigenvalues, * focus-focus components have quadruples of complex eigenvalues with non-zero real- and imaginary parts. Note also that, if λ is an eigenvalue of multiplicity k, then so are -λ, λ, and -λ (cf. van der Meer <cit.>). Concerning monodromy, we note that if Λ is a (compact) leaf containing n singular points of which all are of focus-focus type, then it has been shown that the monodromy around Λ is given by M = [ 1 n; 0 1 ], see the works by Matsumoto <cit.>, Lerman and Umanskii <cit.>, Matveev <cit.>, and Zung <cit.>. This result will be drawn on again in our discussion of semitoric and hypersemitoric systems. §.§ Toric systems Let us start with the `easiest' class of integrable systems: Let (M,ω,F) be an integrable system with M compact and connected. If all integrals of (M,ω,F) generate an effective S^1-action, then the system is said to be a toric system. Atiyah <cit.> and Guillemin and Sternberg <cit.> showed that the image of the momentum map of a toric system is a convex polytope, called the momentum polytope. Later, Delzant <cit.> showed that toric systems are classified up to isomorphism by their momentum polytope. Delzant's classification was then extended to non-compact manifolds by Karshon and Lerman <cit.>. Note that the singular points of a toric system are all non-degenerate and only contain components of elliptic or regular type. §.§ Semitoric systems Delzant's <cit.> classification of toric manifolds has been generalized by Pelayo and Vũ Ngọc <cit.> together with Palmer and Pelayo and Tang <cit.> to the following class of integrable systems, called “semitoric systems”. Semitoric systems are a natural class of systems, generalizing toric systems by relaxing the assumption of periodicity on one of the integrals defining the system. Semitoric systems are closely related to so called almost-toric system, see for instance Symington <cit.> and Vũ Ngọc <cit.>. The notion “semitoric” is natural, and has been used in different contexts, including symplectic geometry of Hamiltonian torus action by Karshon and Tolman <cit.>, integrable systems Vũ Ngọc <cit.> and Pelayo and Vũ Ngọc <cit.>, partially equivariant embedding problems in toric geometry by Pelayo <cit.>, and mathematical physics by Martini and Taylor <cit.>. We refer to Pelayo <cit.> for further discussion and references. Let (M, ω, F=(J,H)) be a 4-dimensional integrable system, where M is connected. Then (M, ω, F=(J,H)) is a semitoric system if * J is proper and generates an effective S^1-action, * F has only non-degenerate singularities (if any) and none of them admit hyperbolic components. Note that, under the assumptions of Definition <ref>, Vũ Ngọc <cit.> showed that the fibres of F are connected, thus generalizing the connectivity statement from the toric case as shown by Atiyah <cit.> and Guillemin and Sternberg <cit.>. The main difference between toric and semitoric systems is the possible appearance of focus-focus singular points. 
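The eigenvalue criterion above can be checked directly on the normal forms. In the sketch below we compute the spectrum of the linearized Hamiltonian vector field, i.e. of Ω^{-1} d^2(c_1 f_1 + c_2 f_2) at the singular point, which is the operator whose eigenvalues display the trichotomy listed above (a symmetric Hessian by itself has real spectrum); the coordinate ordering, the choice of c_1, c_2 and the matrix names are our own conventions for this illustration.

import numpy as np

# Coordinates ordered (x, y, xi, eta); omega = dx ^ dxi + dy ^ deta
Omega = np.array([[0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [-1, 0, 0, 0],
                  [0, -1, 0, 0]], dtype=float)

def linearization_eigenvalues(hessian):
    """Spectrum of the linearized Hamiltonian vector field Omega^{-1} d^2H at the
    singular point; a different sign convention only flips eigenvalues, not their type."""
    return np.linalg.eigvals(np.linalg.inv(Omega) @ hessian)

c1, c2 = 1.0, 2.0
# Focus-focus pair: H = c1*(x*xi + y*eta) + c2*(x*eta - y*xi)
H_ff = np.array([[0, 0, c1, c2],
                 [0, 0, -c2, c1],
                 [c1, -c2, 0, 0],
                 [c2, c1, 0, 0]])
print(np.round(linearization_eigenvalues(H_ff), 6))  # quadruple +-c1 +- i*c2

# Elliptic-elliptic point: H = (x^2 + xi^2)/2 + (y^2 + eta^2)/2
H_ee = np.eye(4)
print(np.round(linearization_eigenvalues(H_ee), 6))  # two purely imaginary pairs +-i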
Note that if c ∈ F(M) is a focus-focus singular value, then its preimage F^-1(c) has the shape of a so-called pinched torus where the number of pinches equals the number of focus-focus points in the fibre, cf. for instance Bolsinov and Fomenko <cit.>. Vũ Ngọc <cit.> showed that one can associate an equivalence class of polygons with the image of the momentum map of a semitoric system. But unlike to the toric case, this is not enough to classify semitoric systems. Pelayo and Vũ Ngọc <cit.> were able to classify so-called simple semitoric systems, i.e. semitoric systems for which each fibre of J contains at most one focus-focus point, by formulating the following five invariants: * the number of focus-focus points, * the Taylor series or singularity type invariant, * the polygon invariant, * the height invariant, and * the twisting index invariant. Palmer, Pelayo and Tang <cit.> extended the result to the non-simple case, building on the symplectic classification of multi-pinched focus-focus fibres by Pelayo and Tang <cit.>. The five invariants will be discussed further in Section <ref>, where also two examples will be covered, namely the coupled angular momenta (Section <ref>), and an example for which the polygon takes the shape of an octagon (Section <ref>). Other important examples of semitoric systems are the spherical pendulum (cf. Dullin <cit.>) and the Jaynes-Cummings model (cf. Babelon, Cantini and Douçot <cit.>, Pelayo and Vũ Ngọc <cit.>, and Alonso, Dullin and Hohloch <cit.>). §.§ Hypersemitoric systems Hohloch and Palmer <cit.> considered a yet more general class of integrable systems than semitoric systems by allowing for singular points with hyperbolic components and certain degenerate singular points, namely so-called parabolic singular points: a singular point p of an integrable system (M, ω, F=(f_1,f_2)) is parabolic if there exists a neighbourhood U ⊂ M of p with (generally non-canonical) coordinates (x, y, λ, ϕ) and functions q_i = q_i(f_1,f_2) for i ∈{ 1,2} of the form q_1 = x^2 - y^3 + λ y q_2 = λ. A coordinate free definition is given in Bolsinov, Guglielmi and Kudryavtseva <cit.>. Note that the same normal form in fact applies to parabolic orbits, which means that from the smooth point of view, there is only one type of degenerate singularities appearing in hypersemitoric systems (for more details, see Kudryavtseva and Martynchuk <cit.>). Parabolic points are also known under the name of cusps or cuspidal points. Moreover, parabolic points naturally appear as transition points between (families of) elliptic-regular and hyperbolic-regular points. The following definition generalizes the natural notions of toric and semitoric systems we have seen earlier in this paper, and appears in recent work by Hohloch and Palmer <cit.>, following also work by Kalashnikov <cit.> as explained below. A 4-dimensional integrable system (M, ω, F=(J,H)) is called hypersemitoric if * J is proper and generates an effective S^1-action, * all degenerate singular points of F (if any) are of parabolic type. Note that the existence of a global S^1-action prevents the appearance of hyperbolic-hyperbolic singularities in a hypersemitoric system. The original motivation for introducing this class, however, comes from the result of Hohloch and Palmer <cit.> stating that any 4-dimensional Hamiltonian system X_J which generates an effective S^1-action is extendable to a hypersemitoric system (M, ω, (J,H)). 
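To see why parabolic points are also called cusps, one can compute the critical set and the critical values of the normal form given above. The sympy sketch below is only a verification under the stated normal form; the angle variable ϕ does not enter q_1 or q_2 and is therefore omitted, and the final cusp relation 27 q_1^2 = 4 q_2^3 is what the computation produces for this particular choice of coordinates.

import sympy as sp

x, y, lam, t = sp.symbols('x y lambda t', real=True)
q1 = x**2 - y**3 + lam * y
q2 = lam

# The map (q1, q2) drops rank where the x- and y-derivatives of q1 vanish
crit = sp.solve([sp.diff(q1, x), sp.diff(q1, y)], [x, lam], dict=True)
print(crit)                                 # x = 0, lambda = 3*y**2

# Critical values along that set, parametrized by y = t:
v1 = q1.subs({x: 0, lam: 3 * t**2, y: t})   # = 2*t**3
v2 = q2.subs({lam: 3 * t**2})               # = 3*t**2
print(sp.simplify(v1), sp.simplify(v2))

# They satisfy the semicubical cusp relation 27*q1^2 = 4*q2^3:
print(sp.simplify(27 * v1**2 - 4 * v2**3))  # 0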
Furthermore, the set of hypersemitoric systems is open in the set of 4-dimensional integrable systems with a global effective Hamiltonian circle action (see Kalashnikov <cit.>). Dullin and Pelayo <cit.> showed that, starting with a semitoric system, one can use a subcritical Hamiltonian-Hopf bifurcation (which transforms a focus-focus point to an elliptic-elliptic point, see Sections <ref> and <ref>) to generate a flap (see Section <ref>) on said system, thus creating a hyperbolic semitoric system (cf. <cit.>). Although the name of this type of system is very similar to the name hypersemitoric, they are defined differently. Hyperbolic semitoric systems require the same conditions as hypersemitoric systems for the integral J generating a circle action. However, the set of hyperbolic singularities in hyperbolic semitoric systems is required to be non-empty, and the set of degenerate singularities is required to be isolated, but not necessarily of parabolic type. Nevertheless, many hypersemitoric systems can thus be generated by performing subcritical Hamiltonian-Hopf bifurcations, together with so-called blow-ups (also known as corner chops, see for instance Hohloch and Palmer <cit.> and references therein) on the (newly generated) elliptic-elliptic points. §.§ Topological invariants: atoms and molecules Finally, we will recall a complete topological invariant for a generic isoenergy level of a two degree of freedom integrable system which was introduced by Fomenko and Zieschang <cit.>. This invariant is intimately linked to hyperbolic-regular and elliptic-regular points and naturally appears in (hyper)semitoric systems as well as in systems without a global S^1-action, which in fact form a majority of known integrable systems (including the Kovalevskaya top and many other integrable cases in rigid body dynamics, various geodesic flows, billiards, etc.). We will follow the presentation of Bolsinov and Fomenko <cit.>. Let f be a Morse function on a manifold M. Note that the leaves of f foliate the manifold. Let x ∼ y if and only if x and y are in the same leaf of f and denote by Γ := M / ∼ the space of leaves of f. Since f is a Morse function, Γ is in fact a graph, called the Reeb graph of f on M, where singular leaves give rise to the vertices. There are two types of vertices: * a vertex is called an end vertex if it is the end of one edge only, * otherwise it is called an interior vertex. Note that the end vertices of a Reeb graph correspond to local minima and maxima (thus elliptic points) of the Morse function, whilst the interior vertices correspond to saddle-points (thus hyperbolic points). Let f : M →ℝ be a Morse function on a 2-dimensional surface M. An atom is a tubular neighbourhood denoted by P^2 of a singular fibre f^-1(c) together with the fibration f : P^2 →ℝ on this neighbourhood. The atom is orientable if the surface P^2 is orientable and non-orientable otherwise. We now give a brief overview of the so-called simple atoms, which are atoms whose singular fibres contain only one singular point and which are referred to as atom A, atom B and atom B̃. There exist many more atoms, which are defined similarly to the aforementioned ones. A more detailed exposition can be found in Bolsinov and Fomenko <cit.>. Let us first consider atom A, which represents the case of local minima or maxima of the function f.
The Reeb graph of the atom is a line segment illustrating the energy levels of f together with an arrow pointing in the direction of increasing energy, and a symbol A illustrating the extrema. Thus, there exist two atoms of type A of which the associated Reeb graphs are sketched in Figure <ref>. One can do a similar construction for saddles. Note, however, that there exist both orientable and non-orientable saddles, and they lead to atoms of type B and B̃, respectively. One can generate such atoms by considering a cylinder and gluing a strip to one of its ends (more specifically, attaching an index-1 handle). If the strip is not twisted, this can be deformed to an orientable saddle, whilst if it is twisted, it can be deformed to a non-orientable saddle. Figure <ref> shows the Reeb graphs of these atoms. There also exist atoms with more than one singular point in the singular fibre (cf. Bolsinov and Fomenko <cit.>). However, these atoms still form two main types: the first type consists only of atoms of type A, whilst the second type consists of all other atoms (which are in fact saddle atoms). Let now (M, ω, (H,f)) be an integrable system on a symplectic 4-manifold M and let Q = {x ∈ M | H(x) = constant} be a `generic' so-called isoenergy 3-surface (see Bolsinov and Fomenko <cit.> for the exact conditions on Q). Let Q/∼ be the space of leaves, which can also be pictured as a (Reeb) graph where the vertices correspond to the singular leaves. Now, the singular leaves correspond to so-called 3-atoms, which are defined similarly to the atoms we saw before, but now the neighbourhoods are 3-dimensional. It turns out that these 3-atoms are in one-to-one correspondence with the set of 2-atoms possibly endowed with a finite number of marked points or stars – corresponding to exceptional fibres of the Seifert fibration naturally associated to a 3-atom, see Bolsinov and Fomenko <cit.>. For simplicity, 2-atoms with stars will also be referred to as 2-atoms. Thus, we will consider the graph defined by Q/∼ with the vertices corresponding to 2-atoms. This graph is called the molecule of (M, ω, (H,f)) on Q. A molecule contains a lot of information about the foliation of the isoenergy surface Q. But this type of molecule consists of atoms glued together so far without the knowledge of how this gluing is performed. Keeping track of the gluing gives us the final piece of information that we need to give a molecule the meaning of an invariant: the gluing is performed by the so-called gluing matrix C_i = [ α_i β_i; γ_i δ_i ] ∈ GL(2, ℤ), det C_i = -1. To the gluing matrix C_i, there are two invariants assigned, namely r_i := α_i/β_i mod 1 if β_i≠ 0 and r_i := ∞ if β_i = 0, as well as ϵ_i := sign β_i if β_i≠ 0 and ϵ_i := sign α_i if β_i = 0. These two invariants alone are not enough for our purposes, and so one more invariant has to be introduced. An edge e_i of a molecule W is called infinite if r_i = ∞, and finite otherwise. Cutting the molecule along finite edges splits it into several connected components. The components not containing any atoms of type A are called families. Let U_k be a family. Recall that the edges of atoms are `oriented' by arrows. An edge in U_k is said to be outgoing if the arrow points from a vertex inside U_k to a vertex outside U_k. In the opposite case an edge in U_k is called incoming. If the edge joins a vertex inside U_k to another vertex inside U_k, then the edge is called interior.
To each edge e_i in U_k we assign the following integer: Θ_i := ⌊α_i/β_i⌋ if e_i is an outgoing edge, Θ_i := ⌊-δ_i/β_i⌋ if e_i is an incoming edge, and Θ_i := -γ_i/α_i if e_i is an interior edge. With this, we construct the third, and final, invariant we want to associate to W, namely n_k := ∑_e_i ∈ U_k Θ_i ∈ ℤ. The invariants r_i, ϵ_i and n_k will be called marks. One can now endow the molecule W with the three marks defined above, and define the marked molecule as the quadruple W^* := (W, r_i, ϵ_i, n_k). Fomenko and Zieschang <cit.> showed that two integrable systems on generic isoenergy 3-surfaces are Liouville equivalent if and only if their marked molecules coincide. Marked molecules are also known as Fomenko-Zieschang invariants. The collection of such marked molecules can be thought of as a topological portrait of the system, which contains more information than for example the topological types of the individual singular leaves/fibres. Since hypersemitoric systems only contain elliptic, hyperbolic-regular, focus-focus and parabolic points, but no hyperbolic-hyperbolic ones, one can show that marked loop molecules form complete local topological invariants of the torus fibration of a hypersemitoric system. In other words, the loop molecules around a given singularity of the hypersemitoric system determine its topological type. Note that the same is not true for general hyperbolic-hyperbolic singularities of integrable 2 degree of freedom systems; see Bolsinov and Oshemkov <cit.>. § SEMITORIC SYSTEMS In this section, we will briefly recall the construction of the five invariants of semitoric systems introduced by Pelayo and Vũ Ngọc <cit.> and its generalizations, then observe transitions from toric to semitoric systems by creating focus-focus points, and eventually consider some explicit examples. Two semitoric systems (M_1,ω_1,(J_1,H_1)) and (M_2,ω_2,(J_2,H_2)) are said to be isomorphic if there exists a symplectomorphism φ : M_1→ M_2 such that φ^*(J_2,H_2) = (J_1,f(J_1,H_1)) for some smooth function f such that ∂ f/∂ H_1 > 0. Since semitoric systems always come with a smooth, globally defined action J, this definition is basically saying that two semitoric systems are equivalent if and only if the corresponding Lagrangian fibrations are fibrewise symplectomorphic (up to possibly changing J to ± J + const). Pelayo and Vũ Ngọc <cit.> showed that two simple semitoric systems are isomorphic if and only if all five invariants (defined below) are equal for the two systems. The simplicity assumption has been removed from the classification by Palmer, Pelayo and Tang <cit.>, but the invariants in the non-simple case are more complicated, and we do not present them here. §.§ The five semitoric invariants Let (M, ω, F=(J,H)) be a simple semitoric system. We will use the identification S^1 = ℝ/2πℤ in what follows. Let us now explain each of the five invariants in more detail. §.§.§ Number of focus-focus points Vũ Ngọc <cit.> proved that M has a finite number of focus-focus singular points. Denoting this number by n_FF, one thus has 0 ≤ n_FF < ∞. Then n_FF forms an invariant for semitoric systems (cf. Pelayo and Vũ Ngọc <cit.>). §.§.§ Taylor series invariant Denote the focus-focus points of (M, ω, F=(J,H)) by m_i for 1 ≤ i ≤ n_FF. Let us now consider one focus-focus point, and denote it by m without the index, to simplify the notation. 
Recall from Section <ref> that there exists a neighbourhood U of m with symplectic coordinates (x,y,ξ,η) such that the quadratic parts of J and H span a Cartan subalgebra with the following basis: q_1 = xξ + yη, q_2 = xη - yξ. Note that the Hamiltonian flow generated by q_2 is 2π-periodic. We now follow the exposition in Vũ Ngọc <cit.>: Let Λ_z = F^-1(z) be a regular fibre near the singular fibre containing m. For any point A ∈Λ_z, denote by τ_1(z) the first return time of the flow generated by X_H to the X_J-orbit through A, and let τ_2(z) ∈ ℝ/2πℤ be the time it takes to close up this trajectory under the flow of X_J. Vũ Ngọc <cit.> showed that, for some determination of the complex logarithm ln z, the functions σ_1(z) := τ_1(z) + Re(ln z), σ_2(z) := τ_2(z) - Im(ln z) extend to smooth and single-valued functions in a neighbourhood of c = F(m). Moreover, σ := σ_1 dz_1 + σ_2 dz_2 yields a closed 1-form under the identification z = (z_1, z_2) ∈ ℝ^2 ≃ ℂ. Define S via dS = σ and S(c) = 0 and denote the Taylor series of S at z = c by (S)^∞. The Taylor series invariant, for all focus-focus points m_i, 1 ≤ i ≤ n_FF, is then given by the n_FF-tuple ((S_i)^∞)_i=1^n_FF. There is another way to define the Taylor series invariant. Let γ_z^1 and γ_z^2 be a basis of the first homology group of the torus Λ_z that varies smoothly with the base point z such that γ_z^1 is a representative of the cycle corresponding to the (periodic) flow of J and γ_z^2 represents a homology cycle obtained by first moving with the flow of X_H using time τ_1(z) and then with the flow of X_J using time τ_2(z). Now consider the action integral 𝒜(z) := ∫_γ_z^2α, where α is a primitive of ω on some neighbourhood of Λ_z. Then one finds, for z ≃ (z_1,z_2) ∈ ℝ^2, that d𝒜(z) = τ_1(z) dz_1 + τ_2(z) dz_2. One can in fact interpret S as a regularised action integral via S(z) = 𝒜(z) - 𝒜(c) + Re(z ln z - z). Note that the above construction involves a certain number of choices which have to be made compatibly with the construction of the polygon invariant and the twisting index invariant below. The exact dependencies are explained in detail in the forthcoming article by Alonso, Hohloch, and Palmer <cit.>. §.§.§ Polygon invariant Let m_1, …, m_n_FF be the focus-focus points and denote by c_1:=F(m_1), …, c_n_FF:= F(m_n_FF) their values, ordered such that the first coordinate of the focus-focus values increases. Denote by B := F(M) the image of the momentum map. Vũ Ngọc <cit.> showed that the set B_r ⊆ F(M) of regular values of F coincides with the set int B ∖{c_1, …, c_n_FF}. One can render B_r simply connected by making a vertical cut from each focus-focus value c_i either upwards or downwards to the boundary of F(M). By the Arnol'd-Liouville theorem, the momentum map induces an integral affine structure on B (which in general does not agree with the one induced by the inclusion of B into ℝ^2). Recall that affine transformations leaving a vertical line invariant arise from vertical translations composed with a matrix of the form T^k := [ 1 0; k 1 ] with k ∈ ℤ. Now denote by l_i ⊂ ℝ^2 the vertical line through the focus-focus singular value c_i ∈ ℝ^2. This line splits ℝ^2 into two half-spaces. For k ∈ ℤ, let t_l_i^k : ℝ^2 → ℝ^2 be the map that leaves the left half-space invariant and shears the right half-space by T^k. We now accommodate all focus-focus singular values by setting 𝐤 := (k_1, …, k_n_FF) and defining t_𝐤 := t_l_1^k_1∘…∘ t_l_n_FF^k_n_FF. 
For each 1 ≤ i ≤ n_FF, let ϵ_i∈{-1, +1}, and denote by l_i^ϵ_i the vertical half line starting at c_i, going upwards if ϵ_i = +1, and downwards if ϵ_i = -1, and let l^ϵ := l_1^ϵ_1∪ … ∪ l_n_FF^ϵ_n_FF be the union of the lines running through all focus-focus values for a choice of ϵ := (ϵ_1, … , ϵ_n_FF). Then the set B ∖ l^ϵ is simply connected for all possible choices of ϵ_i. Vũ Ngọc <cit.> showed that there exists a homeomorphism f := f_ϵ : B → ℝ^2 depending on the choices of ϵ and preserving J such that f(B) is a rational convex polygon. Restricted to B ∖ l^ϵ, the homeomorphism f becomes a diffeomorphism onto its image which sends the integral affine structure of B_r ∖ l^ϵ to the integral affine structure of ℝ^2. The map μ := f ∘ F is called a generalized toric momentum map for (M, ω, F=(J,H)) (cf. Pelayo and Vũ Ngọc <cit.>). In order to turn the polygon Δ := μ(M) into an invariant of the underlying semitoric system, one needs to get rid of the choices involved in the construction of Δ. This is done by means of a group action: consider the group 𝒢 := {T^k | k ∈ ℤ} and the action of the group {-1, +1}^n_FF×𝒢 on (Δ, (l_i)_i=1^n_FF, (ϵ_i)_i=1^n_FF) given by ((ϵ'_i)_i=1^n_FF, T^k) ·(Δ, (l_i)_i=1^n_FF, (ϵ_i)_i=1^n_FF) := (t_𝐮(T^k(Δ)), (l_i)_i=1^n_FF, (ϵ'_iϵ_i)_i=1^n_FF) where 𝐮 = ((ϵ_i- ϵ'_i)/2)_i=1^n_FF. Then the polygon invariant is the orbit of (Δ, (l_i)_i=1^n_FF, (ϵ_i)_i=1^n_FF) under the above action (cf. Pelayo and Vũ Ngọc <cit.>). §.§.§ Height invariant For i ∈{1, …, n_FF}, consider the focus-focus singular points m_i and their images c_i := F(m_i) and let μ and Δ be as in Section <ref>. The height (or the volume) invariant, as introduced by Pelayo and Vũ Ngọc <cit.>, is given by the n_FF-tuple (h_1, …, h_n_FF) with h_i := pr_2(μ(m_i)) - min_s ∈ l_i∩Δ pr_2(s), where pr_2 : ℝ^2 → ℝ is the projection onto the second coordinate (in <cit.> it is explained how this height invariant corresponds to the volume of certain submanifolds, and hence it is sometimes called the volume invariant). The function h_i thus measures the distance between the focus-focus value in the polygon Δ=μ(M) and its lower boundary. Furthermore, h_i is independent of the choice of the generalized toric momentum map μ, since it can also be seen as the symplectic volume of certain level sets. §.§.§ Twisting index invariant Let U_i be a neighbourhood of a focus-focus singular point m_i∈ F^-1(c_i), and let V_i = F(U_i). Vũ Ngọc and Wacheux <cit.> showed that there exists a local symplectomorphism Ψ : (ℝ^4, ω_0) → (M, ω) sending the origin to m_i, and a local diffeomorphism G : ℝ^2 → ℝ^2 sending 0 to F(m_i) such that F ∘Ψ = G ∘ q_i, where q_i = (q_i^1, q_i^2) is given by (<ref>). Recall that q_i^2 generates a circle action, so it must correspond to J. If necessary, after composing Ψ with either/both of the canonical transformations (x, ξ) ↦ (-x, -ξ) and (x, y, ξ, η) ↦ (-ξ, -η, x, y), one finds that G is of the form G(q_i^1, q_i^2) = (q_i^2, G_2(q_i^1, q_i^2)), where ∂ G_2/∂ q_i^1(0) > 0. We will extend G_2(q_i^1, q_i^2) to another Hamiltonian function G_2(H, J), such that the two functions agree when restricted to U_i. Here (H, J) is a new momentum map for the semitoric system, and G_2 : ℝ^2 → ℝ is some function to be discussed further below. Recall the action integral introduced in the construction of the Taylor series invariant (see Subsection <ref>): 𝒜_i(z) := ∫_γ_i, z^2α. Let G_i(z) := 𝒜_i(z) - 𝒜_i(c_i) for i = 1, …, n_FF. 
Observe that G_i(0) is well defined and equal to zero since the actions 𝒜_i(z) are given by integrating a primitive 1-form over a loop on a Lagrangian torus Λ_z. Note that this could also have been seen by using the regularised action in (<ref>). Now, let us define the Hamiltonian function via H_i, p := G_i(J, H). Then lim_m → m_i H_i, p = 0. Note also that, by (<ref>), we get a Hamiltonian vector field X_i, p = (τ_i^1∘ F) X_J + (τ_i^2∘ F) X_H. This was discussed by Pelayo and Vũ Ngọc <cit.>. They called the momentum map ν := (J, H_i, p) the privileged momentum map for F = (J, H). Now, let μ be a generalized toric momentum map. As μ preserves J, its components satisfy (μ_1, μ_2) = (J, μ_2). As μ_2, J and H_i,p are all action variables, there exists an invertible matrix A ∈ GL(2, ℤ) such that (X_J, X_μ_2) = A(X_J, X_i, p). The matrix has to be of the form A = [ 1 0; k_i 1 ], hence X_μ_2 = k_i X_J + X_i, p. Pelayo and Vũ Ngọc <cit.> showed that k_i does not depend on X_i, p or G_i. The integer k_i is called the twisting index. Note that, if k_i is the twisting index of m_i, then locally μ = T^k_iν. Also, if the polygon is transformed by some T^r, then ν does not change, whilst μ→ T^rμ. Note that the twisting index depends on the polygon Δ. To introduce an actual invariant, similarly to Subsection <ref>, we consider the orbit of (Δ, (l_i)_i=1^n_FF, (ϵ_i)_i=1^n_FF, (k_i)_i=1^n_FF) under the action of {-1, +1}^n_FF×𝒢. Specifically, with 𝐮 := (u_i)_i=1^n_FF := ((ϵ_i-ϵ_iϵ'_i)/2)_i=1^n_FF, the action is given by ((ϵ'_i)_i=1^n_FF, T^k) · (Δ, (l_i)_i=1^n_FF, (ϵ_i)_i=1^n_FF, (k_i)_i=1^n_FF) = (t_𝐮(T^k(Δ)), (l_i)_i=1^n_FF, (ϵ'_iϵ_i)_i=1^n_FF, (k + k_i + ∑_j=1^ĩ_i u_j)_i=1^n_FF) where we set ∑_j=1^0 u_j := 0 and where ĩ_i = i or ĩ_i = i-1 depending on the choice of certain conventions. This orbit is called the twisting index invariant (cf. Pelayo and Vũ Ngọc <cit.>). Note that the above formula differs slightly from the original one given in Pelayo and Vũ Ngọc <cit.> by the extra term ∑_j=1^ĩ_i u_j. This term accounts for the way in which changing cut directions affects the twisting index. Its absence in the original formula was pointed out to us by Yohann Le Floch and Joseph Palmer (for a detailed discussion, we refer to the forthcoming paper by Alonso, Hohloch, and Palmer <cit.>). §.§ Modifications and generalizations of the five invariants In fact, all five invariants are intimately related, and there is no need to consider them separately. Le Floch and Palmer <cit.> took three of the five semitoric invariants — the number of focus-focus points, the polygon invariant, and the height invariant — and joined them together to form a single invariant, called the marked semitoric polygon invariant. When Palmer, Pelayo and Tang <cit.> extended the classification to non-simple semitoric systems, they gathered all five invariants into one big invariant, called the complete semitoric invariant. §.§ Supercritical Hamiltonian-Hopf bifurcation If one perturbs a toric system, one may obtain a semitoric system, in particular if an elliptic-elliptic point is transformed into a focus-focus point. Such a transformation is called a supercritical Hamiltonian-Hopf bifurcation. In coordinate form, it can more specifically be defined as follows (see in particular Equation (<ref>) below with a > 0). Let 𝔊 be a Lie group acting on the space of smooth real-valued functions C^∞(ℝ^n) whose action is defined by g · f(x) = f(g^-1(x)) for g ∈𝔊, f ∈ C^∞(ℝ^n) and x ∈ ℝ^n. 
Furthermore, let ℝ[x] denote the space of polynomials on ℝ^n, and let ℝ[x]^𝔊 be the space of 𝔊-invariant polynomials. Hilbert showed that, if 𝔊 is compact, then there exist finitely many invariant polynomials ρ_i ∈ ℝ[x]^𝔊 for i = 1, …, k which generate ℝ[x]^𝔊 as an algebra (cf. van der Meer <cit.>). Such invariant polynomials ρ_i are called Hilbert generators. Let (x, y, ξ, η) be canonical coordinates on ℝ^4 and define the following three Hilbert generators: J = x η - y ξ, X = 1/2(ξ^2 + η^2), and Y = 1/2(x^2 + y^2). When considering (hyper)semitoric systems, we will choose 𝔊 = S^1 to be given by the periodic Hamiltonian flow of X_J. Then van der Meer <cit.> showed that there exists the following equivariant normal form for a Hamiltonian-Hopf bifurcation Ĥ_s = J + X + s Y + a Y^2, where s, a ∈ ℝ are parameters with a ≠ 0, which we for simplicity take as a definition for this type of bifurcation. If a > 0 the bifurcation is called supercritical, and subcritical otherwise. Note that here the momentum map is given by (J, Ĥ_s). Recall that the singular points in a 2-degree of freedom toric system all have only elliptic and/or regular components. If we perturb one of the integrals of a 2-degree of freedom toric system as in the above normal form, then we can make one of the elliptic-elliptic singular points turn into a focus-focus point. On the level of eigenvalues, 4 purely imaginary eigenvalues at an elliptic-elliptic point collide when the bifurcation parameter attains the value s = 0 and then change into four complex eigenvalues (cf. van der Meer <cit.>). One can see two examples of supercritical Hamiltonian-Hopf bifurcations in Figure <ref> and Figure <ref>. The subcritical case, when the sign of a is negative, is treated in Section <ref>. §.§ Examples Computing the semitoric invariants explicitly for given systems has proven to be very difficult, since it requires a combination of theoretical knowledge and strong computational skills. §.§.§ Coupled angular momenta system Consider the manifold M := S^2× S^2 and equip it with the symplectic form ω := - (R_1ω_S^2⊕ R_2ω_S^2) where ω_S^2 is the standard symplectic form on S^2 and R_1, R_2 ∈ ℝ^>0. When Sadovskií and Zhilinskií <cit.> studied the so-called coupled angular momenta system, they found a focus-focus point and nontrivial monodromy. Since this system is both interesting from a physics point of view and not very complicated from a mathematical point of view, it recently became a popular subject to study. Le Floch and Pelayo <cit.> showed that the coupled angular momenta system on M, given in Cartesian coordinates by J(x_1,y_1,z_1,x_2,y_2,z_2) := R_1(z_1-1) + R_2(z_2+1), H(x_1,y_1,z_1,x_2,y_2,z_2) := (1-t)z_1 + t(x_1x_2 + y_1y_2 + z_1z_2), describes a semitoric system for all t ∈ ℝ ∖{t^-,t^+}, where t^± := R_2/(2R_2 + R_1 ∓ 2√(R_1R_2)). The system has four singular points of rank 0 which are located at the top and bottom of the spheres, i.e. when (z_1,z_2) = (± 1, ± 1). Three of the points are always elliptic-elliptic, whilst (1, -1) is a focus-focus point if t^- < t < t^+ and elliptic-elliptic if t < t^- or t > t^+. Thus, the number-of-focus-focus-points invariant is 0 if (1, -1) is elliptic-elliptic, or 1 if (1, -1) is focus-focus. For some values of t, the moment image is plotted in Figure <ref>. Le Floch and Pelayo <cit.> computed, for certain parameter values, the first two terms of the Taylor series, the polygon, and the height invariant for this system. The full classification was achieved by Alonso, Dullin and Hohloch <cit.>. 
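As a quick numerical illustration of the threshold values t^± above, the following minimal sketch (Python with numpy assumed; the radii R_1 = 1, R_2 = 2 and the sampled coupling values are purely illustrative and not taken from the cited works) evaluates t^- and t^+ and reports the type of the rank-0 point (1, -1):

```python
import numpy as np

# Illustrative radii; any R_1, R_2 > 0 may be used.
R1, R2 = 1.0, 2.0

# t^+/- = R_2 / (2*R_2 + R_1 -/+ 2*sqrt(R_1*R_2)), as stated above.
t_minus = R2 / (2 * R2 + R1 + 2 * np.sqrt(R1 * R2))
t_plus = R2 / (2 * R2 + R1 - 2 * np.sqrt(R1 * R2))

def type_of_point(t):
    # (z_1, z_2) = (1, -1) is focus-focus iff t^- < t < t^+, otherwise elliptic-elliptic.
    return "focus-focus" if t_minus < t < t_plus else "elliptic-elliptic"

print(f"t^- = {t_minus:.4f}, t^+ = {t_plus:.4f}")
for t in (0.1, 0.5, 0.95):
    print(f"t = {t}: (1, -1) is {type_of_point(t)}")
```

For R_1 = 1 and R_2 = 2 this prints t^- ≈ 0.26 and t^+ ≈ 0.92, so of the three sampled couplings only t = 0.5 yields a focus-focus point.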
The semitoric invariants of the coupled angular momenta system are as follows: The number of focus-focus points is either zero or one, see above. The Taylor series invariant is of the form S(j,k) = j arctan( (R_2^2(2t - 1) - R_1R_2(t + 1) + R_1^2t)/((R_1 - R_2)R_1 r_A) ) + k ln( 4 R_1^{5/2} r_A^3/(R_2^{3/2}(1 - t) t^2) ) + j^2/(16 R_1^4 R_2 r_A^3)( R_2^4(2t - 1)^3 - R_1R_2^3(32t^3 - 46t^2 + 17t - 1) - 3R_1^2R_2^2t(4t^2 - 7t + 1) + R_1^3R_2(3 - 5t)^2 - R_1^4t^3) + jk(R_2 - R_1)/(8R_1^3R_2r_A^3)( R_2^2(2t - 1)^2 - 2R_1R_2t(6t - 1) + R_1^2t^2) + k^2/(16R_1^4R_2r_A^3)( R_2^4(2t - 1)^3 - R_1R_2^3(16t^3 - 42t^2 + 15t + 1) - R_1^2R_2^2t(28t^2 - 3t -3) + R_1^3R_2t^2(13t - 3) + R_1^4t^3) + 𝒪(3), where r_A = √((R_1^2 + 4R_2^2)(t - t^-)(t^+ - t)). The polygon and twisting index invariants are illustrated in Figure <ref>. Set R := R_2/R_1. Then the height invariant of the coupled angular momenta is given by h = 2 min(R_1, R_2) + R_1/(π t)( r_A - 2 R t arctan( r_A/(R - t)) - 2 t arctan( r_A/(R + t - 2 R t)) ). §.§.§ The (semi)toric octagon system De Meulenaere and Hohloch <cit.> constructed a semitoric system with four focus-focus singular points. The system was created by first considering the octagon Δ obtained by chopping off the corners of the square [0, 3] × [0,3]. Since Δ turned out to be a Delzant polygon, Delzant's <cit.> construction could be used to construct a toric system which has Δ as image of the momentum map. This is done by means of symplectic reduction of ℂ^8 (equipped with its standard symplectic structure) and yields a 4-dimensional, compact, connected, symplectic manifold (M_Δ, ω_Δ). A point on M_Δ is written as an equivalence class of the form [z] = [z_1, …, z_8] with z_i ∈ ℂ for i = 1, …, 8. The (toric) momentum map F = (J, H):(M_Δ, ω_Δ) → ℝ^2 is given by J([z_1, …, z_8]) = 1/2|z_1|^2, H([z_1, …, z_8]) = 1/2|z_3|^2. Denote by Re the real part of a complex number. By perturbing H to H_t := (1-2t) H + t γ Re( z̅_2z̅_3z̅_4z_6z_7z_8) for 0 < γ < 1/48, De Meulenaere and Hohloch <cit.> obtained a system with momentum map (J, H_t):(M_Δ, ω_Δ) → ℝ^2 that is toric for 0 ≤ t < t^-, semitoric for t^- < t < t^+, and toric again for t^+ < t ≤ 1, where t^- := 1/2(1 - 24 γ) and t^+ := 1/2(1 + 24 γ). Note that 0 < t^- < 1/2 and 1/2 < t^+ < 1. At t = 1/2, the system has two focus-focus fibres, each containing two focus-focus points, see Figure <ref>. The two fibres then have the shape of double pinched tori. Apart from one representative of the polygon invariant and the number of focus-focus points, no semitoric invariants have yet been calculated. §.§ State of the art concerning other semitoric systems Spread over the literature (cf. works by Babelon, Dullin, Le Floch, Pelayo, Vũ Ngọc, and others), there are various partial results concerning the computation of the semitoric invariants for certain parameter values for certain systems. For instance, a Taylor series type invariant has been calculated by Dullin <cit.> for the spherical pendulum (which is, strictly speaking, not a semitoric system due to lack of properness). Pelayo and Vũ Ngọc <cit.> computed the number of focus-focus points, the polygon, and the height invariant for the so-called coupled spin oscillator system. Alonso, Dullin and Hohloch <cit.> completed the set of semitoric invariants for this system by computing the Taylor series and twisting index invariant. Both of these systems have only one focus-focus point. Hohloch and Palmer <cit.> generalized the coupled angular momenta system to a family of semitoric systems with two focus-focus points. 
Alonso and Hohloch <cit.> computed the polygon and height invariant for a subfamily, and Alonso, Hohloch and Palmer <cit.> are currently computing its twisting index invariant. Le Floch and Palmer <cit.> devised semitoric systems arising from Hirzebruch surfaces and computed their number of focus-focus points, the polygon invariant, and, for certain parameter values, also their height invariant. § HYPERSEMITORIC SYSTEMS In this section, we give a brief overview of existing and related results concerning hypersemitoric systems. Recall that, compared to semitoric systems, a hypersemitoric system (Definition <ref>) may in addition have singular points with hyperbolic components and degenerate singular points of parabolic type. §.§ Flaps and pleats/swallowtails Two possibilities of how hyperbolic-regular and parabolic points occur in hypersemitoric systems are so-called flaps and pleats/swallowtails. A good exposition with examples for pleats/swallowtails can be found in Efstathiou and Sugny <cit.>, and for flaps see Efstathiou and Giacobbe <cit.>. There are various ways to visualize flaps and pleats/swallowtails. Instead of using the image of the momentum map, over which a hypersemitoric (or even more general) system gives rise to a singular fibration with possibly disconnected fibres, it makes sense to remember the branching and disconnectedness by working with the so-called bifurcation complex (also known as unfolded momentum domain). One can either identify it with the leaf space of a system (M, ω, F=(J, H)) or describe it directly as a stratified manifold V together with a map F̃: M → V and a projection τ: V → ℝ^2 such that τ∘F̃ = F and the regular level sets of F̃ correspond to the connected components of the level sets of F. We will summarize some of the findings of these works. In the preimage under τ of a sufficiently small neighbourhood of a parabolic value, the bifurcation complex has two sheets: one sheet, the local base ℬ, contains regular values and a compact line segment ℒ of hyperbolic-regular values, and one sheet, the local flap ℱ, contains a line of elliptic-regular and of hyperbolic-regular values (which meet at a parabolic value) as well as regular values `between' these lines, see Figure <ref>. Both sheets intersect (or rather touch) each other along the line segment of hyperbolic-regular values including its parabolic end point. The topological boundary of ℱ consists of the line segments of elliptic-regular and hyperbolic-regular values joined at the parabolic value and a line of regular values, called the free boundary. Flaps and pleats/swallowtails now arise as follows: Consider a system with a compact line segment ℒ of hyperbolic-regular values with parabolic end points denoted by c_1 and c_2. For i ∈{ 1,2}, let ℬ_i be their local bases and ℱ_i their local flaps. If one glues the free boundary of ℱ_1 to the free boundary of ℱ_2, this will define a flap topology around ℒ, see Figure <ref>. If the free boundary of ℱ_1 is glued to the boundary of ℬ_2, and the free boundary of ℱ_2 is glued to the boundary of ℬ_1, this will define a pleat topology, see Figure <ref>. Efstathiou and Giacobbe <cit.> showed that the bifurcation complex in an open neighbourhood of ℒ can have either the pleat topology or the flap topology. Efstathiou and Giacobbe <cit.> proved another interesting result: Let p and q be coprime integers and let S^3 := { (z_1, z_2) ∈ ℂ^2 | |z_1|^2 + |z_2|^2 = 1 } be the unit sphere in ℂ^2. 
Consider the (free) action of ℤ_p := ℤ/pℤ on S^3 given by (z_1, z_2) ↦(exp(2 π i / p) z_1, exp(2 π i q / p) z_2). The lens space L(p,q) := S^3 / ℤ_p is the orbit space defined by this action. Then, with ℒ as above, the type of lens space L(p, 1) topologically embedded in F^-1(ℒ) determines the monodromy of the Lagrangian fibration in a neighbourhood of ℒ up to a sign determined by the choice of orientations. §.§ Subcritical Hamiltonian-Hopf bifurcations Recall from Section <ref> that a semitoric system with focus-focus points may arise via supercritical Hamiltonian-Hopf bifurcations from a toric one. Analogously, a hypersemitoric system with flaps may arise from a semitoric one with focus-focus points via so-called subcritical Hamiltonian-Hopf bifurcations by `replacing' a focus-focus point by a (small) flap, see for instance Dullin and Pelayo <cit.>. To be more precise, recall the normal form Ĥ_s = J + X + s Y + a Y^2 from Equation (<ref>): If the sign of a is negative, then a focus-focus point (four complex eigenvalues) will first turn into a degenerate point (two purely imaginary eigenvalues of multiplicity 2) and then will bifurcate into an elliptic-elliptic point (four purely imaginary eigenvalues) from the value of which, lying on a flap, two lines of elliptic-regular values emanate that connect the elliptic-elliptic value to the parabolic values (cf. Section <ref>). The parabolic values are connected by a line of hyperbolic-regular values. In Figure <ref>, an example of a semitoric system that went through a subcritical Hamiltonian-Hopf bifurcation is displayed. §.§ Atoms, molecules, and classifications Recall from Section <ref> the notion of a marked molecule W^*, which is a complete isoenergy invariant of a 2 degree of freedom integrable system. The topology caused by the lines of elliptic-regular and hyperbolic-regular values in flaps and pleats (swallowtails) can in particular be described by marked molecules. Here one can consider `loop molecules' (see Figure <ref>) around the parabolic values with B-atoms describing the bifurcation of one of the two lines emanating from the cusp and A-atoms the other bifurcation. The important result in this context is that the loop molecule around the cusp is uniquely defined and moreover `knows' what happens in its vicinity, in the sense that the loop molecule completely determines the topology of the corresponding singular torus fibration. This result directly follows from the fact that a single parabolic orbit (more precisely, the associated compact singular fiber, which has the form of a cuspidal torus) gives rise to only one singular torus fibration up to a fibrewise homeomorphism, see Efstathiou and Giacobbe <cit.>. We conjecture that in fact more is true and that there is only one such torus fibration up to fibrewise diffeomorphisms, cf. Kudryavtseva and Martynchuk <cit.>. A similar topological result is known for elliptic-elliptic, elliptic-hyperbolic and focus-focus singularities of integrable systems on 4-manifolds, but not so for hyperbolic-hyperbolic singularities (having multiple hyperbolic-hyperbolic points on a singular fiber), which are in general not determined by their loop molecules only, see for instance <cit.>. Interestingly, in the smooth case, the fibrewise classification turns out to be different also in the case of focus-focus singularities (having multiple points on the same singular fibre), see Bolsinov and Izosimov <cit.>. 
The fibres of hypersemitoric systems will be classified by means of a `labeled graph' in the forthcoming paper by Gullentops and Hohloch <cit.>, which extends the special case of hyperbolic-regular fibres studied in Gullentops' thesis <cit.>. §.§ Examples Hypersemitoric systems were first defined in Hohloch and Palmer <cit.>, who gave several examples for this class of systems. There are more examples in the paper by Gullentops and Hohloch <cit.> and Gullentops' thesis <cit.>. §.§.§ Hypersemitoric coupled angular momenta system Let J and H be as in the (semitoric) coupled angular momenta system, as discussed in Section <ref>. We will now modify H, such that we instead consider the following: H̃(x_1,y_1,z_1,x_2,y_2,z_2) := H(x_1,y_1,z_1,x_2,y_2,z_2) + sz_1^2, with parameter s ∈ ℝ. It turns out that, by varying s in the momentum map F̃ = (J,H̃) at coupling parameter t = 0.5 (for which the semitoric case s = 0 always has a focus-focus value), we can generate flaps and pleats in the image of F̃, see Figure <ref>. It turns out that the point p_1 = (0,0,1,0,0,-1) is of focus-focus type if s_p_1^- < s < s_p_1^+, where s_p_1^± = (R_1 ± 2 √(R_1 R_2))/(4R_2). If s < s_p_1^- or s > s_p_1^+, then p_1 is of elliptic-elliptic type. Numerical evidence indicates that, if R_1 < R_2, a flap appears for s < s_p_1^- and a pleat appears for some s > s_p_1^+. If s ∈{s_p_1^-,s_p_1^+}, then (0,0,1,0,0,-1) is a degenerate singularity. This can be shown by a procedure similar to the one in Le Floch and Pelayo <cit.>. Furthermore, the point p_2 = (0,0,-1,0,0,1) is a focus-focus point if s_p_2^- < s < s_p_2^+, where s_p_2^± = (R_1 ± 2 √(R_1 R_2) + 2R_2)/(4R_2). When s < s_p_2^-, F̃(p_2) is an elliptic-elliptic value on the boundary of the momentum map image. For some s > s_p_2^+ we have that F̃(p_2) is an elliptic-elliptic value which joins the pleat created by p_1, see Figure <ref>. §.§.§ The hypersemitoric octagon system A specific family of examples can be created by taking the toric octagon system constructed in De Meulenaere and Hohloch <cit.> and, instead of perturbing it only into a semitoric system (cf. Section <ref>), adding more perturbation terms to obtain a family of hypersemitoric systems. To be more precise, let F=(J, H) be as in Section <ref> and modify H to H_t with t = (t_1, t_2, t_3, t_4) ∈ ℝ^4 via setting H_t := (1 - 2t_1)H + ∑_i=1^4 t_iγ_i, with γ_1([z]) := 1/50 Re( z̅_2z̅_3z̅_4z_6z_7z_8), γ_2([z]) := 1/50 |z_5|^4|z_4|^4, γ_3([z]) := 1/50 |z_4|^4|z_7|^4, γ_4([z]) := 1/50 |z_5|^4|z_7|^4. Gullentops and Hohloch <cit.> proved the appearance of flaps and pleats/swallowtails and their collisions for certain values of the parameter t, see for example Figure <ref>. Moreover, they studied the shape and topology of hyperbolic-regular fibres in the system (J, H_t) and showed that, for fibres over a hyperbolic-regular value, not only double tori (`two tori stacked on top of each other', resp. a figure eight loop times S^1) are possible, but that the number of `tori stacked on top of each other' possibly appearing as fibre of a hyperbolic-regular value is bounded from above by 13.
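A rough way to visualize the momentum map images discussed in this section is to sample S^2 × S^2 and scatter-plot the corresponding values of (J, H̃). The following minimal sketch (Python with numpy and matplotlib assumed; the parameter values are illustrative and are not claimed to reproduce any particular figure or flap/pleat regime) does this for the hypersemitoric coupled angular momenta system; it only shows the closure of the image of F̃, not the sheet structure of the bifurcation complex:

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative parameters; flaps and pleats only occur for suitable (R1, R2, s).
R1, R2, t, s = 1.0, 2.0, 0.5, -0.4

rng = np.random.default_rng(0)
n = 200_000
z1, z2 = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
th1, th2 = rng.uniform(0, 2 * np.pi, n), rng.uniform(0, 2 * np.pi, n)
x1, y1 = np.sqrt(1 - z1**2) * np.cos(th1), np.sqrt(1 - z1**2) * np.sin(th1)
x2, y2 = np.sqrt(1 - z2**2) * np.cos(th2), np.sqrt(1 - z2**2) * np.sin(th2)

# J and H as in the coupled angular momenta system, with the extra s*z_1^2 term in H~.
J = R1 * (z1 - 1) + R2 * (z2 + 1)
H_tilde = (1 - t) * z1 + t * (x1 * x2 + y1 * y2 + z1 * z2) + s * z1**2

plt.scatter(J, H_tilde, s=0.05, alpha=0.3)
plt.xlabel("J")
plt.ylabel("H~")
plt.title("Sampled values of the momentum map (J, H~)")
plt.show()
```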
http://arxiv.org/abs/2307.03874v1
20230708013842
The geometry of the Thurston metric: a survey
[ "Huiping Pan", "Weixu Su" ]
math.GT
[ "math.GT", "math.CV", "math.DG", "32G15, 30F45, 30F60" ]
This chapter is a survey of the Thurston metric on Teichmüller space. The central issue is the construction of extremal Lipschitz maps between hyperbolic surfaces. We review several constructions, including the original work of Thurston. The coarse geometry and isometry rigidity of the Thurston metric, as well as the relation between the Thurston metric and the Thurston compactification, are discussed. Some recent generalizations and developments of the Thurston metric are sketched. Mathematical classification (2010) 32G15; 30F45; 30F60.
http://arxiv.org/abs/2307.05588v2
20230710155742
Collaborative Song Dataset (CoSoD): An annotated dataset of multi-artist collaborations in popular music
[ "Michèle Duguay", "Kate Mancey", "Johanna Devaney" ]
cs.SD
[ "cs.SD", "eess.AS" ]
The Collaborative Song Dataset (CoSoD) is a corpus of 331 multi-artist collaborations from the 2010–2019 Billboard “Hot 100” year-end charts. The corpus is annotated with formal sections, aspects of vocal production (including reverberation, layering, panning, and gender of the performers), and relevant metadata. CoSoD complements other popular music datasets by focusing exclusively on musical collaborations between independent acts. In addition to facilitating the study of song form and vocal production, CoSoD allows for the in-depth study of gender as it relates to various timbral, pitch, and formal parameters in musical collaborations. In this paper, we detail the contents of the dataset and outline the annotation process. We also present an experiment using CoSoD that examines how the use of reverberation, layering, and panning is related to the gender of the artist. In this experiment, we find that men's voices are on average treated with less reverberation and occupy a more narrow position in the stereo mix than women's voices. § INTRODUCTION As far back as the 1960s, Billboard charts have featured collaborations between independent acts. In recent years, however, the number of songs featuring a collaboration between artists has skyrocketed <cit.>. Part of this is due to the rising popularity of hip-hop in the 1980s, in which collaboration between different artists is a fixture. The 1986 version of “Walk This Way” by Aerosmith and Run DMC is an oft-cited example of such a collaboration. As Rose notes, the success of a collaboration between a hip-hop group (Run DMC) and a rock group (Aerosmith) “brought [hip-hop's] strategies of intertextuality into the commercial spotlight” <cit.>. The 1990 success of “She Ain't Worth It” by Glenn Medeiros ft. Bobby Brown marked the first time a sung and rapped collaboration reached #1 on Billboard's “Hot 100.” Molanphy notes that during this period, multi-artist collaborations crystallized into two different frameworks: the “featured bridge rapper,” and the “featured hook singer” <cit.>. Subsequently, tracks with one or more guest artist(s) have become a mainstay on the charts. By 2021, over a third (39%) of the songs in Billboard's “Hot 100” year-end chart credited more than one artist. Consider for instance “Save Your Tears,” by singers The Weeknd & Ariana Grande, which occupied second place on the chart. A solo version of the song originally appeared on The Weeknd's album After Hours (2020). While this version achieved commercial success, the remix with Ariana Grande became a #1 single on the Billboard “Hot 100” in May 2021 and became the longest-charting collaboration in Billboard “Hot 100” history. In the remix, Grande performs approximately half of the vocals, transforming the solo song into a dialogue between two characters. The collaboration between the two artists is responsible for the popularity of the remix, inviting both Grande's and The Weeknd's fans to stream, buy, and otherwise engage with the song. Several musicological studies have examined this relationship between collaborative songs and commercial success <cit.>. Other work has provided in-depth explorations of the musical characteristics of collaborative songs, with a particular focus on hip-hop <cit.>. Given the popularity of multi-artist collaborations, a more systematic exploration of their musical features is warranted. 
In this paper, we introduce the Collaborative Song Dataset (CoSoD), an annotated dataset that facilitates the study of various musical features in multi-artist collaborations. CoSoD provides metadata and analytical data for 331 multi-artist collaborations appearing on the Billboard “Hot 100” year-end charts between 2010 and 2019. The dataset also provides timed annotations on the song's formal structure, artists' gender, vocal delivery and pitch, and vocal production (reverberation, panning, and layering). As detailed in Section 2, the range of features included in the dataset makes it more broadly applicable for MIR research tasks. These include structural segmentation, vocal mixing, automatic music production, and examinations of gender in popular music. After outlining the contents of the dataset and the annotation methodology in Section 3, we present an experiment in Section 4 that examines the relationship between vocal production parameters and the gender of the performer in a subset of CoSoD. § RELATED WORK CoSoD complements the growing list of annotated datasets that provide information on song structure in various popular music genres, e.g.,<cit.>, and is the first dataset to exclusively contain data on collaborative songs between independent acts. It can thus be used for training and evaluating structural segmentation tasks and for studying the specific structural characteristics of collaborative songs. CoSoD also complements existing datasets for multi-track mixing/analysis<cit.> and vocal analysis<cit.> by providing analytical annotations on the treatment of the voice in a mix. In recent years, several studies have proposed tools and methods to automate the mixing of multi-track recordings<cit.>. Such automatic production methods have various artistic and creative applications. One framework has been suggested to remix early jazz recordings, which are pre-processed using source separation then remixed with automatic production tools<cit.>. <cit.> proposes a prototype for an automatic DJ mixing system allowing for cross-fading via beat and tempo adjustment between songs. Studies on automatic mixing can be enhanced by knowledge of common mixing practices for specific instruments or sound sources. For instance, one study uses mixing practices that are consistent between mixing engineers to create a model that automatically mixes multiple drum tracks<cit.>. By focusing on vocals, which are a salient component of the mix in popular music<cit.>, CoSoD provides a complementary approach to these studies on automated production. By providing annotations based on close listening of specific vocal mixing parameters in the different formal sections of a song, the dataset allows for the identification of trends in panning, layering, and use of artificial reverberation as they are applied to vocals in commercially successful post-2010 popular music. It enables the direct comparison of how various mixing parameters are applied to individual artists' voices within and across songs. In addition to facilitating the modeling of voice mixing, CoSoD also allows musicologists to ask questions about the way different voice types and individuals are mixed. Finally, CoSoD facilitates the study of the relationship between gender and popular music. A number of previous studies have examined music programming and streaming services, exploring for instance how listeners tend to stream male artists more than women and mixed-gender groups<cit.>. 
Watson discusses gender inequality and low programming of women's music in country music radio<cit.>. Other work addresses how a listener's declared gender impacts automatic music recommendation<cit.> and musical preferences<cit.>. Additionally, various studies have addressed race and gender, along with sexist and racist discourses and practices, as they impact the music industry in general and the Billboard charts in particular<cit.>. By providing data on musical features, gender, and the role of these parameters within the formal structure of a song, CoSoD offers a new and complementary angle for the study of gender as it directly relates to the musical content of post-2010 popular collaborations. § COLLABORATIVE SONG DATASET (COSOD) CoSoD[<https://github.com/duguay-michele/CoSoD>] consists of metadata and analytical data of a 331-song corpus comprising all multi-artist collaborations on the Billboard “Hot 100” year-end charts published between 2010 and 2019. Each song in the dataset is associated with two CSV files: one for metadata and one for analytical data. We assembled the corpus by identifying every song on the charts that featured collaborations between two or more artists who usually perform independently from one another. §.§ Annotation of Musical Features The following analytical data is provided for each song in the dataset: * Index number: 1 to 33 * Time stamps: In seconds (start of new section) * Formal section label: Introduction, Verse, Pre-chorus, Chorus, Hook, Dance Chorus<cit.>, Link, Post-chorus, Bridge, Outro, Refrain or Other * Name of artist(s): Full name of the artist performing in each section. If all artists credited on the Billboard listing perform in a section, the label both or all is used. Songs were assigned at random to one of two annotators, who generated time stamps at the onset of each formal section with Sonic Visualiser.[The first annotator (first author) has a doctorate in music theory, while the second (second author) is a doctoral candidate in the same field.] The annotators provided formal labels according to their analysis of the song. In case of ambiguity in the formal sections, both annotators discussed the analysis and agreed upon an interpretation. For each formal section performed by one artist only, the following analytical data on the voice is provided: * Gender of artist: M (Man), W (Woman), NB (Non-binary) * Function of artist: Feat (Featured artist), Main (Main artist), Neither, Uncredited * Style of vocal delivery: R (Rapped vocals), S (Sung vocals), Spoken * Minimum pitch value: In Hz * First quartile pitch value: In Hz * Median pitch value: In Hz * Third quartile pitch value: In Hz * Maximum pitch value: In Hz * Environment value: On a scale of E1 to E5 * Layering value: On a scale of L1 to L5 * Width (panning) value: On a scale of W1 to W5 The annotators determined the name of the artist(s) performing in each section by ear, and using song lyric website Genius.com to validate their hearing. In cases where an artist only provides minimal background vocals (a few words) in a particular formal section, their name is not included. One annotator then provided analytical data on each formal section performed by one artist only. Data on gender was gathered from media interviews and social media statements from the artists, and matches the artist's gender identity at the time of the dataset creation. This methodology yielded three categories: man, non-binary, and woman. 
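The analytical fields listed above lend themselves to straightforward programmatic inspection. The following minimal sketch (Python with pandas assumed; the file name and column headers are hypothetical stand-ins for the fields above and may not match the released CSVs exactly) loads one song's analytical file, derives section durations from the start-time stamps, and tabulates the solo sections per artist:

```python
import pandas as pd

# Hypothetical file and column names, modeled on the analytical fields listed above.
sections = pd.read_csv("song_001_analysis.csv")

# Time stamps mark section starts, so each duration is the gap to the next section
# (the final section gets NaN in this simple sketch).
sections["duration"] = sections["time_stamp"].shift(-1) - sections["time_stamp"]

# Only sections performed by a single artist carry gender/pitch/production annotations.
solo = sections.dropna(subset=["gender"])
print(solo.groupby(["artist", "section_label"])["duration"].sum())
```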
We understand these labels as umbrella terms that encompass a variety of lived experiences that intersect with race, sexuality, and other power structures. The style of vocal delivery was determined by ear. The distinction between rapping and singing is porous, with many vocalists adopting ambiguous modes of vocal delivery. We consider any formal section containing a melodic line performed with sustained pitches as sung. The pitch data was obtained by first isolating the vocals from the full mix using Open-Unmix<cit.> and then running the pYIN Smoothed Pitch Track transform <cit.> on the isolated vocal file. The minimum, first quartile, median, third quartile, and maximum pitch points in each formal section were calculated and recorded in the dataset.[The accuracy of the F0 estimates used to calculate this feature is impacted by the quality of the vocal source separation. A more accurate isolated vocal file would allow for more precise pitch data. Additionally, since pYIN Smoothed Pitch Track can only track a single melodic line, the accuracy of the pitch data is lessened in sections that feature multiple vocal layers with different pitch content.] The Environment, Layering, and Width values were determined by the first annotator to ensure consistency. Rather than attempting to reconstruct the mixing process itself, the annotations for these parameters represent the way a listener might perceive the final mix upon listening to it on stereo speakers. The Environment of a voice is the space in which the voice reverberates. Environment values were determined via an aural analysis of the full track by using the following scale[The scales were initially published in <cit.>.]: E1: The voice's environment sounds flat. There might be minimal ambiance added to the voice, but there is no audible echo or reverberation. E2: The last word or syllable of most musical phrases is repeated through an echo or reverberation effect. E3: The vocal line is repeated in one clear layer of echo. This added layer may be dry or slightly reverberant and has a lower amplitude than the main voice. E4: The main voice is accompanied by a noticeable amount of reverberation. There is no clear echo layer, but rather a sense that the main voice is being reverberated across a large space. E5: The main voice is accompanied by two or more layers of echo. The echo layers may be noticeably reverberant, similar in amplitude to the main voice, and difficult to differentiate from one another. The Layering of a voice refers to the additional vocal tracks that are dubbed over a single voice. Layering values were determined via an aural analysis of the full track by using the following scale: L1: The voice is presented as solo. Occasionally, a few words may be doubled with another vocal track for emphasis. Double-tracking is often used in the mixing process to create a fuller sound, with a final result sounding like a single vocal layer. Such cases fall into this category. L2: The voice is presented as solo, but additional vocal layers are added at the end of musical phrases for emphasis. L3: The main voice is accompanied by one or two layers. Layers might provide minimal harmonies or double the main voice. The layers have a noticeably lower amplitude than the main voice. L4: The main voice is accompanied by two or more layers. These layers are close copies of the main voice, sharing the same pitch and similar amplitude. L5: The main voice is accompanied by two or more layers. 
These layers add harmonies to the main voice, creating a thick and multi-voiced texture. The Width of a voice refers to the breadth it occupies on the stereo stage. The Width was analyzed aurally with the aid of panning visualisation tool MarPanning<cit.>. The annotator simultaneously listened to the isolated vocal audio and observed the MarPanning visualization generated from the isolated vocals to determine the Width value. Since Open-Unmix occasionally omits reverberated components of the voice from the isolated file, the analyst then listened to the full track to confirm the Width value. Width values were determined according to the following scale: W1: The voice occupies a narrow position in the center of the stereo stage. W2: The voice occupies a slightly more diffuse position in the center of the stereo stage. W3: The main voice occupies a narrow position in the center of the stereo stage, but some of its components (echo, reverberation, and/or additional vocal tracks) are panned toward the sides. These wider components have a lower amplitude than the main voice. W4: The main voice occupies a slightly more diffuse position in the center of the stereo stage, and some of its components (echo, reverberation, and/or additional vocal tracks) are panned toward the sides. These wider components have a lower amplitude than the main voice. W5: The main voice and its associated components (echo, reverberation, and/or additional vocal tracks) are panned across the stereo stage. All components have a similar amplitude. §.§ Metadata The following metadata is provided for each song in the dataset: * Index number: From 1 to 331 * Year of first appearance on Billboard “Hot 100” year-end charts * Chart position: As it appears on the Billboard “Hot 100” year-end charts * Song title: As it appears on the Billboard “Hot 100” year-end charts * Name of artists: As it appears on the Billboard “Hot 100” year-end charts * Collaboration type: * Lead/featured: Collab. with lead artist(s) and featured artist(s) * No lead/featured: Collab. with no determined lead * DJ/vocals: Collab. between a DJ and vocalist(s) * Gender of artists: * Men: Collab. between two or more men * Women: Collab. between two or more women * Mixed: Collab. between two or more artists of different genders * Collaboration type + gender: * Collab M: Collab. between men, no determined lead * Collab M and W: Collab. between men and women, no determined lead * Collab NB and W: Collab. between women and non-binary artists, no determined lead * Collab W: Collab. between women, no determined lead * DJ with M: Collab. between male DJ and male vocalist * DJ with Mix: Collab. between male DJ and mixed-gender vocalists * DJ with NB: Collab. between male DJ and non-binary vocalist * DJ with W: Collab. between male DJ and female vocalist * M ft. M: Men featuring men * M ft. NB: Men featuring non-binary artist(s) * M ft. W: Men featuring women * W ft. M: Women featuring men * W ft. W: Women featuring women * MusicBrainz URL: Link to the song on open music encyclopedia MusicBrainz Each song in the dataset is labeled with an index number from 1 to 331. Songs are numbered in reverse chronological order, beginning with the 2019 charts and ending with 2010. One annotator obtained the metadata on year, chart position, title, and artists from the information available on the Billboard charts. Within years, songs are organized according to their position on the chart, from highest to lowest. Some songs appear on the charts two years in a row. 
In such cases, we only include the data for the earliest appearance. §.§ Corpus Statistics The dataset can be divided into three categories (shown in Figure <ref>): (i) collaborations between the lead artist(s) and featured artist(s), which account for 221, or 66.7% of the tracks, (ii) collaborations with no determined lead or featured artist, which account for 59, or 17.8%, of the tracks, and (iii) collaborations between a DJ and a vocalist, which account for 51, or 15.4% of the tracks. In category (i), the lead artist usually performs the majority of vocals. For example, in “No Limit” (2018) by G-Eazy ft. A$AP Rocky & Cardi B, G-Eazy performs most of the vocals. A$AP Rocky accompanies him in the chorus and Cardi B raps the second verse. In category (ii), the performance of the vocals is often more equally distributed. Such collaborations are often billed as “duets,” and the artists’ names are separated by a “+”, a “&”, or a comma on the Billboard charts. For example,“Something’ Bad” (2014) is labeled as a “Miranda Lambert Duet With Carrie Underwood.” Both vocalists perform approximately equal portions of the song. In category (iii), the DJ does not provide vocals. In “Sweet Nothing” (2012), for instance, only the featured Florence Welch sings. The voice of DJ Calvin Harris is not heard. Mixed-gender collaborations (including any combination of non-binary, women, and men artists) frequently appear on the Billboard charts and account for 162, or 49%, of the tracks in the dataset. Collaborations between two or more men account for 159 tracks, or 48% of the dataset. Finally, collaborations between women account for 10, or 3%, of the tracks. In six of the ten years under study–2011, 2012, 2015, 2017, 2018, and 2019–no collaborations between women reached the Billboard “Hot 100” year-end chart. Conversely, songs with two or more male vocalists were a consistent fixture on the charts. Mixed-gender collaborations, with any combination of men, women, and non-binary artists within the same track, also frequently appear on the charts. Figure <ref> shows the number and type of sections performed by individual artists in the corpus, categorized according to gender. This figure includes identical sections (such as choruses) that are repeated within a song. Sections in which more than one artist performs are not included. More sections are performed by men than by women and non-binary artists, which is to be expected given the over-representation of men in the dataset as a whole (Figure <ref>). Figure <ref> displays the number and type of sections performed by featured artists only. § EXPERIMENT: VOCAL PRODUCTION FEATURES AND GENDER This section examines the relationship between the gender of an artist and the treatment of their voice, as characterized by three of the annotated musical features in the dataset: Environment, Layering, and Width. For the purposes of statistical power in the experiment, only songs with men and/or women artists were included. We only included tracks that contained verse and chorus sections to remove section types that occur in only a few tracks. 
In order to avoid over-representations of tracks with repeated sections (i.e., several instances of the same chorus), we sampled the first verse and chorus performed by a single artist from each track.[If the first verse of a song was performed by two artists simultaneously, while the second verse was only performed by one, we sampled the second verse.] This method resulted in the inclusion of two sections from 287 of the 331 dataset tracks in the experiment. We analyzed the data with three separate logistic regressions–one for each feature–using the statsmodels package in Python. We encoded the different levels of the parameter scales (defined in Section 3.1) with one-hot encoding in order to allow us to examine whether there is a correspondence between specific parameter scale levels and gender. Of the three logistic regressions, Environment (R^2_McFadden(4, N = 574) = 0.028, p < 0.0001) and Width (R^2_McFadden(4, N = 574) = 0.035, p < 0.0001) were statistically significant, while Layering (R^2_McFadden(4, N = 574) = 0.0036, p = 0.64) was not. The McFadden R^2 values for both Environment and Width were very low. This was not surprising since we did not anticipate that these features, particularly in isolation, would be explanatory. We were instead interested in exploring whether there is a significant association between these features and the man/woman gender binary in these collaborations. For Environment, there were significant effects (p < 0.0001) for E1 (β = -1.18, 95% CI [-1.49, -0.87]), E2 (β = -1.12, 95% CI [-1.56, -0.69]), and E3 (β = -0.78, 95% CI [-1.14, -0.42]). There was a significant negative effect for the lower/mid-level environment values and gender, meaning that men's voices are more likely to be set in less reverberant spaces than women's voices. For Width, there were significant effects at all of the levels: W1 (β = -1.84, 95% CI [-2.50, -1.17]), W2 (β = -1.58, 95% CI [-2.39, -0.77]), W3 (β = -1.13, 95% CI [-1.51, -0.75]), W4 (β = -0.47, 95% CI [-0.77, -0.17]), and W5 (β = -0.60, 95% CI [-0.95, -0.25]). The Width results are harder to interpret than the Environment ones because the coefficient values are smaller and all negative. This is likely due to the imbalance between men and women in featured artist roles, both in the dataset (see Figure <ref>) overall and in the sample used in this experiment (404 of the included sections featured men while only 170 featured women). However, the overall trend is similar to the one in the Environment experiment: lower-level values are more common for men than women. Men's voices are more likely to occupy a narrow, centered position on the stereo stage, while women's voices are more likely to occupy a wider space. These results were expected given that high Environment values tend to be associated with high Width values, as the reverberated components of a voice are generally panned across the stereo stage. The lack of significant results for Layering indicates that there are no differences in the ways in which this parameter is applied to men's and women's voices. Since textural variation (such as the addition of vocal layers) is a standard feature of verse-chorus form, it is possible that Layering is linked to the type of formal section rather than to the gender of the vocalist. The significant results for the Environment and Width parameters can be interpreted in light of Brøvig-Hanssen's and Danielsen's work on technological mediation<cit.>. The authors establish a distinction between transparent and opaque technological mediation in recorded music. 
Transparent mediation, on one hand, is meant to create a recorded product that sounds natural and unaltered. Low Environment and Width values, for instance, are closer to transparent mediation because they sound closer to a real-life performance that is unmediated with artificial reverb or panning. Opaque mediation, on the other hand, highlights the use of technology by making it obvious to the listener. High Width and Environment values, with their clearly audible artificial reverberation and wide panning, are examples of opaque mediation. The results of the experiment therefore suggest that men's voices are more likely to be mixed to sound “transparent” and natural while women's voices are more likely to be mixed to sound “opaque” and technologically mediated. Overall, this experiment demonstrates that within verse and chorus sections in CoSoD, there is a significant difference between the treatment of men's and women's vocals in terms of Environment and Width. This suggests that some mixing parameters contribute to the sonic differentiation of men's and women's voices in popular music. § CONCLUSION CoSoD is a 331-song corpus of all multi-artist collaborations appearing on the 2010–2019 Billboard “Hot 100” charts. Each song in the dataset is annotated with metadata, formal sections, and aspects of vocal production (including reverberation, layering, panning, and gender of the artists). As outlined in Section 2, CoSoD has several implications for MIR research. It provides annotated data for structural segmentation tasks and a listener-centered perspective on vocal mixing that could be useful for automatic music mixing tasks. The dataset could also be used to determine how these parameters interact with song form. Further study could also examine the relationship between the vocal range of an artist in a given section, their type of vocal delivery (rapped, spoken, or sung), and mixing parameters. Finally, the dataset also allows for the examination of the ways in which Environment, Layering, and Width values tend to be grouped together to create specific vocal production effects. The dataset also facilitates musicological study of multi-artist collaborations post-2010 and gender norms. The experiment in Section 4 demonstrates this, as its results suggest that, for the chorus and verse data sampled from 287 songs in the dataset, men's voices are more likely to be narrow and less reverberated than women's. Opportunities for future research include examining whether there is a significant difference in the way Environment, Width, Layering, or other parameters are applied to women's and men's voices within collaborations that feature mixed- and same-gender vocalists. In other future work, we plan on expanding the annotations in the dataset with time-aligned lyrics, harmonic analyses, and additional performance data for the voice extracted using AMPACT <cit.>. These annotations will include both spectral features and semantic descriptors, and the data will be encoded in relation to vocal-line transcriptions, where possible <cit.>. We also plan on providing annotations on vocal production parameters in sections performed by multiple artists and examining how vocal production parameters correlate with mixing parameters such as panning. Finally, while our dataset focuses on gender, we are also interested in encoding other aspects of identity, such as race, in order to provide an intersectional perspective on artists' identities. 
However, categorizing artists according to race proves to be more problematic than gender. Matthew D. Morrison writes that “white (and other nonblack) people freely express themselves through the consumption and performance of commodified black aesthetics without carrying the burden of being black under white supremacist structures” <cit.>. In other words, white and non-Black artists–such as rappers Iggy Azalea and G-Eazy, or singer Bruno Mars–often assume particular sonic characteristics that implicitly associate them with commodified notion of Blackness. By categorizing all white artists together, for instance, we would ignore this phenomenon and the way it is sonically realized. Further work needs to be done to understand how to best expand on CoSoD, or datasets in general, to account for this dynamic.
http://arxiv.org/abs/2307.05659v1
20230711164153
Probing the Quantitative-Qualitative Divide in Probabilistic Reasoning
[ "Duligur Ibeling", "Thomas Icard", "Krzysztof Mierzewski", "Milan Mossé" ]
math.LO
[ "math.LO", "cs.LO" ]
This paper explores the space of (propositional) probabilistic logical languages, ranging from a purely `qualitative' comparative language to a highly `quantitative' language involving arbitrary polynomials over probability terms. While talk of qualitative vs. quantitative may be suggestive, we identify a robust and meaningful boundary in the space by distinguishing systems that encode (at most) additive reasoning from those that encode additive and multiplicative reasoning. The latter includes not only languages with explicit multiplication but also languages expressing notions of dependence and conditionality. We show that the distinction tracks a divide in computational complexity: additive systems remain complete for 𝖭𝖯, while multiplicative systems are robustly complete for ∃ℝ. We also address axiomatic questions, offering several new completeness results as well as a proof of non-finite-axiomatizability for comparative probability. Repercussions of our results for conceptual and empirical questions are addressed, and open problems are discussed. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1656518.

§ INTRODUCTION

For as long as probability has been mathematized, numbers and numerical calculation have been at the center. From the treatment of probability in the Port Royal Logic in terms of ratios of frequencies, to the modern axiomatic treatment of Kolmogorov, it has always been standard to formulate probabilistic reasoning in fundamentally quantitative terms. It may be surprising that the systematic mathematical analysis of more qualitative probabilistic notions is relatively recent—beginning in earnest only after Kolmogorov's landmark treatise <cit.>—and still occupies not much more than a footnote in the history of the subject.[<cit.> referred to comparative probability as a `neglected concept', and while the work by Fine and others inspired substantial subsequent development, it is still arguably only a marginal part of the field.] This is despite the fact that qualitative probabilistic locutions are of ancient origin, predating the various numerical concepts by millennia (see ), and despite the insistence by many that some of the qualitative notions are somehow primary. In the words of <cit.>, qualitative comparisons like `more likely than' are an expression of `the primordial intuition of probability', while the use of numbers is `a mathematical construct derived from the latter under very special conditions' (p. 269). Similar attitudes were expressed earlier by Keynes, de Finetti, and by many more authors since.

§.§ Why Qualitative?

Aside from inherent interest of codifying and systematizing these putatively more basic or fundamental types of judgments, a number of other purported advantages have been adduced for qualitative formulations of probability. For theorists interested in the measurement of psychological states, it has been argued that comparative judgments provide a more sound empirical basis for elicitation than explicit numerical judgments. To quote <cit.>, `The intuitive idea of using a comparative qualitative relation is that individuals can realistically be expected to make such judgments in a direct way, as they cannot when the comparison is required to be quantitative' (p. 18).
Comparative judgments also appear to be more reliable and more stable over time, for example, compared to point estimates. Even when the elicited comparative judgments may be represented by a numerical function, such a numerical representation will be `a matter of convention' chosen for reasons of `computational convenience' <cit.>. A second rationale for considering qualitative formulations is that they may be seen as more fundamental mathematically, in part by virtue of their flexibility. <cit.> suggests, `The qualitative approach provides a powerful method for the scrutinization and revelation of underlying assumptions of probability theory, is a link to empirical probabilistic concerns, and is a point of departure for the formulation of alternative probabilistic concepts' (p. 143). Indeed, it is easy to construct orderings on events that satisfy some of the same principles as standard numerical probability while violating others. For instance, possibility theory violates finite additivity <cit.>, imprecise probabilities violate comparability <cit.>, and so on. In the infinite case, qualitative orders may accommodate intuitions that elude any reasonable quantitative model (see, e.g., ). For those who judge the Kolmogorov axiomatization to be appropriate only for a limited range of applications, this generality and flexibility offers a theoretical advantage. Finally, a third rationale is that qualitative systems may be in some respect simpler, a vague sentiment expressed in nearly all works on the topic. There is indeed something intuitive in the idea that reasoning about a (not necessarily total) order on a space of events is easier than reasoning about measurable functions to the real unit interval. §.§ Probing the Distinction Whereas a distinction between quantitative and qualitative probabilistic reasoning seems to be ubiquitous, it is not entirely clear what the distinction is exactly. In the present work we adopt a logical approach to this and related questions. By advancing our technical understanding of a large space of probabilistic representation languages, we aim to clarify meaningful ways this distinction might be drawn, and more generally to elucidate further the relationships between specifically probabilistic and broader numerical reasoning patterns. Uncontroversial exemplars of both qualitative and quantitative systems can be identified. At one extreme, a paradigmatically qualitative language is that of comparative probability. This language, which we call ℒ_comp,[Such a language was formulated in explicit logical terms first by <cit.> and by <cit.>, but can be traced back at least to <cit.>.] involves only basic comparisons 𝐏(α) ≿𝐏(β) with the intuitive meaning, `α is at least as likely as β'. We take as a paradigmatically quantitative language one that allows comparison of arbitrary polynomials (sums and products) over probability terms, a language we call ℒ_poly.[Such a language appeared first in <cit.>, and then later in <cit.>.] While ℒ_comp is uncontroversially qualitative and (over finite spaces) vastly underdetermines numerical content, ℒ_poly is manifestly quantitative and is capable of describing probability measures at a relatively fine level of grain. In between these two extremes is a large space of probabilistic representation languages, essentially differing in how much numerical content they can encode. What are the natural classifications of this space? 
We identify one particularly robust classification based on the amount of arithmetic a system (implicitly) encodes, namely the simple distinction between additive and (additive-)multiplicative systems. Thanks to the Boolean structure of events, even ℒ_comp can already codify substantial additive reasoning. One can also add explicit addition to this language (e.g., as in <cit.>). However, much of probabilistic reasoning appeals to notions of (in)dependence and conditionality, which seem to involve not just additive but also multiplicative patterns. This would include (purportedly qualitative) conditional comparisons like 𝐏(α|β) ≿ 𝐏(γ|δ), as studied by <cit.>, as well as seemingly even simpler constructs like (qualitative) confirmation, whereby `β confirms α' just in case 𝐏(α|β) ≻ 𝐏(α) (e.g., <cit.>). A first hint that this arithmetical classification is meaningful comes from the observation that the additive systems always admit an interpretation in rational numbers, which in turn facilitates a natural alternative interpretation in terms of concatenation on strings. By contrast, even the simplest multiplicative systems can easily force irrational numbers. For instance, if 𝐏(A ∧ B) ≈ 𝐏(¬(A ∧ B)), then for 𝐏(A|B) ≈ 𝐏(B) to hold as well, B must have probability 1/√(2).

§.§ Overview of Results

One of our main results is that this distinction between additive and multiplicative systems is matched by a demarcation in computational complexity. The satisfiability problem for additive systems is seen to be complete for 𝖭𝖯-time, thus no harder than the problems of Boolean satisfiability or integer programming. With any modicum of multiplicative reasoning, by contrast, the satisfiability problem becomes complete for the class ∃ℝ, conjectured to be harder than 𝖭𝖯. This classification is surprisingly robust, encompassing the most minimal languages encoding qualitative dependence notions, all the way to the largest system we consider, ℒ_poly, allowing arbitrary addition and multiplication. Within each of these classes—the `purely' additive and the additive-multiplicative—we find a common distinction between systems that only allow comparisons between atomic probability terms and those that admit explicit arithmetical operations over terms. Indeed, on the additive side, ℒ_add augments ℒ_comp with the ability to sum probabilities. As just mentioned, this involves no increase in complexity. However, it does lead to a difference in reasoning principles, and in particular axiomatizability. Drawing on work of <cit.>, <cit.>, and <cit.>, we show that ℒ_comp is not finitely axiomatizable. In stark contrast, we present a new finite axiomatization of ℒ_add that is seen to be simple and intuitive. The work on ℒ_add is then adapted for ℒ_comp, to give a new completeness argument for the powerful polarization rule <cit.>, which has been used to supplant Scott's infinitary finite cancellation schema <cit.>. Rather than deriving the finite cancellation axioms and then appealing to Scott's representation result, we show directly how a variable elimination method can be adapted to show completeness. From a logical point of view, these results together suggest that disallowing explicit addition might be seen as an artificial restriction: reasoning in ℒ_add can always be emulated within ℒ_comp, at the expense of temporarily expanding the space of events. At no cost in computational complexity, ℒ_add codifies the relevant reasoning principles in simple, intuitive axioms.
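As a quick, concrete check of the multiplicative example mentioned above (our own illustration, assuming sympy; it is not part of the paper's development), one can verify symbolically that the two constraints force an irrational probability:

```python
# Symbolic check (sketch): P(A & B) = P(not(A & B)) forces P(A & B) = 1/2, and
# P(A | B) = P(B) amounts to P(A & B) = P(B)**2, so P(B) must be 1/sqrt(2).
import sympy as sp

pAB, pB = sp.symbols("pAB pB", real=True)
solutions = sp.solve([sp.Eq(pAB, 1 - pAB),    # P(A & B) = P(not(A & B))
                      sp.Eq(pAB, pB ** 2)],   # P(A | B) = P(B)
                     [pAB, pB], dict=True)
print(solutions)
# Two roots, pB = +/- sqrt(2)/2; only the non-negative root is a probability,
# so P(B) = sqrt(2)/2 = 1/sqrt(2), which is irrational.
```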
A similar pattern is seen to arise in the multiplicative setting, for systems that fall into the ∃ℝ-complete class. Here we consider two languages, ℒ_poly and the conditional comparative language ℒ_cond alluded to above. The latter involves comparisons of the form 𝐏(α | β) ≿𝐏(γ | δ), with the interpretation `α is at least as likely given β as is γ given δ'. We give a very intuitive finite system for the former language ℒ_poly by presenting a multiplicative annex to our axiomatization for ℒ_add. Varying an argument from <cit.>, where a polynomial language permitting subtraction (via unary negation) was considered,[See also <cit.> in which an (intuitively) even wider polynomial system with explicit rational quantities was strongly axiomatized via an infinitary proof rule.] we show this system complete. The argument turns on a Positivstellensatz of semialgebraic geometry. As for the latter language ℒ_cond, we review pertinent work by <cit.>. §.§ Roadmap We begin by defining several probabilistic languages, including ℒ_comp, ℒ_add, ℒ_cond, and ℒ_poly; showing that these languages form an expressivity hierarchy; and introducing the notions from computational complexity used throughout the paper (§<ref>). We then consider the additive systems ℒ_add and ℒ_comp, proving the soundness and completeness of axiomatizations of both languages (§§<ref>, <ref>), discussing issues of finite axiomatizability (§<ref>), and rehearsing results that characterize the complexity of reasoning in these systems (§<ref>). We turn next to the multiplicative systems, providing an axiomatization of the language ℒ_poly (§<ref>), investigating axiomatic questions for the language ℒ_cond (§<ref>), and characterizing the complexity of reasoning in these multiplicative systems, including a minimal logic ℒ_ind allowing only for Boolean combinations of equality and independence statements (§<ref>). In §<ref>, we summarize the results of this discussion: both in the additive and multiplicative settings, systems with explicitly `numerical' operations are more expressive and admit finite axiomatizations, while incurring no cost in complexity; this does not seem to be the case for `purely comparative' systems (as we showed in the additive case, and conjecture for the case of conditional comparative probability). Finally, in §<ref>, we critically discuss various ways of understanding the distinction between qualitative and quantitive probability logics, before concluding in <ref> with some open questions. § A SPACE OF PROBABILISTIC REPRESENTATION LANGUAGES In this section, we define the syntax and semantics of the additive and multiplicative languages which are the primary focus of the paper's discussion, and we illustrate that these languages form an expressivity hierarchy. We also introduce the key notions from computational complexity that are used to characterize the satisfiability problems of these languages. §.§ Syntax and Semantics Fix a nonempty set of proposition letters 𝖯𝗋𝗈𝗉, and let σ(𝖯𝗋𝗈𝗉) be all Boolean combinations over 𝖯𝗋𝗈𝗉. We will be interested in terms 𝐏(α) for α∈σ(𝖯𝗋𝗈𝗉), which will be standardly interpreted as the probability of α. 
We first define several sets of probability terms, next define languages of comparisons between terms, and finally provide a semantics for these languages:

Define sets of terms using the following grammars:
𝖺 ∈ T_uncond: 𝖺 := 𝐏(α) for any α ∈ σ(𝖯𝗋𝗈𝗉)
𝖺 ∈ T_cond: 𝖺 := 𝐏(α | β) for any α, β ∈ σ(𝖯𝗋𝗈𝗉)
𝖺 ∈ T_add: 𝖺 := 𝐏(α) | 𝖺 + 𝖻 for any α ∈ σ(𝖯𝗋𝗈𝗉)
𝖺 ∈ T_quad: 𝖺 := 𝐏(α) · 𝐏(β) for any α, β ∈ σ(𝖯𝗋𝗈𝗉)
𝖺 ∈ T_poly: 𝖺 := 𝐏(α) | 𝖺 + 𝖻 | 𝖺 · 𝖻 for any α ∈ σ(𝖯𝗋𝗈𝗉).

Define an operator Λ that, for each set of terms T, generates a language of comparisons in those terms:
φ ∈ Λ(T): φ := 𝖺 ≿ 𝖻 | ¬φ | φ ∧ ψ for 𝖺, 𝖻 ∈ T.

Define ℒ_comp = Λ(T_uncond) and ℒ_add = Λ(T_add). Define ℒ_cond = Λ(T_cond), ℒ_quad = Λ(T_quad), and ℒ_poly = Λ(T_poly). Define two further languages:
φ ∈ ℒ_ind: φ := 𝖺 = 𝖻 | 𝖺 ⫫ 𝖻 | ¬φ | φ ∧ ψ for any 𝖺, 𝖻 ∈ T_uncond
φ ∈ ℒ_confirm: φ := 𝐏(α|β) ≿ 𝐏(α) | 𝐏(α) ≿ 𝐏(α|β) | 𝐏(α) = 𝐏(β) | ¬φ | φ ∧ ψ for any α, β ∈ σ(𝖯𝗋𝗈𝗉).

We assume that 0 is an abbreviation for 𝐏(⊥), where ⊥ is any Boolean contradiction, and likewise 1 is an abbreviation for 𝐏(⊤). We let 𝗍 ≈ 𝗍' abbreviate 𝗍 ≿ 𝗍' ∧ 𝗍' ≿ 𝗍; and let 𝗍 ≻ 𝗍' abbreviate 𝗍 ≿ 𝗍' ∧ ¬(𝗍' ≿ 𝗍). Some generally valid axioms and rules (call these principles `core' probability logic) appear in Fig. <ref>. Note that reflexivity of ≿ and non-negativity, 𝐏(α) ≿ 0, both follow from 𝖣𝗂𝗌𝗍. These axioms and rules will be part of every system we study. Call them 𝖠𝖷_base.

A model is a probability space 𝔐 = (Ω, ℱ, ℙ, ⟦·⟧), such that ⟦·⟧: σ(𝖯𝗋𝗈𝗉) → ℘(Ω), with ⟦A⟧ ∈ ℱ for each A ∈ 𝖯𝗋𝗈𝗉. It follows that ⟦α⟧ ∈ ℱ for all α ∈ σ(𝖯𝗋𝗈𝗉). The denotation 𝐏(α)^𝔐 of a basic probability term 𝐏(α) is defined to be ℙ(⟦α⟧), while the denotations of complex terms 𝖺 + 𝖻 and 𝖺 · 𝖻 are defined in the usual recursive manner. We define truth of basic inequality statements in the obvious way: 𝔐 ⊨ 𝖺 ≿ 𝖻 iff 𝖺^𝔐 ≥ 𝖻^𝔐, while Boolean combinations are evaluated as usual. In ℒ_cond, ℒ_ind, and ℒ_confirm, we specify their atomic clauses separately:
𝔐 ⊨ 𝐏(α | β) ≿ 𝐏(γ | δ) iff ℙ(⟦α ∧ β⟧) ℙ(⟦δ⟧) ≥ ℙ(⟦γ ∧ δ⟧) ℙ(⟦β⟧)
𝔐 ⊨ 𝐏(α) ⫫ 𝐏(β) iff ℙ(⟦α ∧ β⟧) = ℙ(⟦α⟧) ℙ(⟦β⟧)
𝔐 ⊨ 𝐏(α|β) ≿ 𝐏(α) iff ℙ(⟦α ∧ β⟧) ≥ ℙ(⟦α⟧) ℙ(⟦β⟧)
𝔐 ⊨ 𝐏(α) ≿ 𝐏(α|β) iff ℙ(⟦α ∧ β⟧) ≤ ℙ(⟦α⟧) ℙ(⟦β⟧).
Equality statements and rational terms are evaluated as expected.

§.§ An Expressivity Hierarchy

The languages introduced in the preceding section form an expressivity hierarchy. Indeed, ℒ_comp is less expressive than both ℒ_add and ℒ_cond, both of which are less expressive than the language ℒ_poly. When we say that one language is less expressive than another, we mean that no statement in the less expressive language distinguishes two models which can be distinguished by some statement in the more expressive language. Drawing arrows from less expressive languages to more expressive ones, the hierarchy can be shown graphically:

[Figure: the expressivity hierarchy. Arrows run from ℒ_comp to ℒ_ind, from ℒ_ind to ℒ_confirm, from ℒ_confirm to ℒ_cond, from ℒ_cond to ℒ_quad, and from ℒ_quad to ℒ_poly; a second path runs from ℒ_comp to ℒ_add and from ℒ_add to ℒ_poly.]

For a formula φ in any of these languages, let Mod(φ) = {𝔐 : 𝔐 ⊨ φ} be the class of its models. For two languages ℒ_1 and ℒ_2, we say that ℒ_2 is at least as expressive as ℒ_1 if for every φ ∈ ℒ_1 there is some ψ ∈ ℒ_2 such that Mod(φ) = Mod(ψ). We say ℒ_2 is strictly more expressive than ℒ_1 if ℒ_2 is at least as expressive as ℒ_1 but not vice versa. Two languages are incomparable in expressivity if neither is at least as expressive as the other.
Figure <ref> illustrates the expressivity hierarchy among the languages introduced above. In particular, ℒ_comp is less expressive than both ℒ_add and ℒ_cond, both of which are less expressive than the language ℒ_poly. In this section, we provide examples to establish the above-pictured hierarchy, with each segment between a pair of languages indicating a strict increase in expressivity. In particular, we will show that ℒ_comp is strictly less expressive than both ℒ_add and ℒ_cond, and that both of these are strictly less expressive than the language ℒ_poly. In fact, in all but one case we show something stronger: namely that (fixing Ω, ℱ, and ⟦·⟧) there are two measures ℙ_1 and ℙ_2 which are indistinguishable in the less expressive language but which can be distinguished by some statement in the more expressive one. This stronger result is not possible in the case of ℒ_add and ℒ_poly, because of the following:

Any distinct measures ℙ_1, ℙ_2 are distinguishable in ℒ_add. If ℙ_1, ℙ_2 are distinct measures, without loss of generality, ℙ_1(α) < n/m < ℙ_2(α) for some natural numbers n and m. Thus ℙ_1(α) added to itself m times is at most ℙ_1(⊤) added to itself n times, while this is not true of ℙ_2; thus ℙ_1, ℙ_2 are distinguishable in ℒ_add.

Additive systems. First, we observe that ℒ_comp is less expressive than ℒ_add. Let ℙ_1(A) = 2/3 and ℙ_2(A) = 3/5. The order on the events A, ¬A, ⊤ induced by these measures is the same, but, for instance, ℙ_1(A) = ℙ_1(¬A) + ℙ_1(¬A), while ℙ_2(A) ≠ ℙ_2(¬A) + ℙ_2(¬A). To show that ℒ_add is less expressive than ℒ_poly, we simply identify a formula φ ∈ ℒ_poly such that there is no ψ ∈ ℒ_add with Mod(φ) = Mod(ψ). For this we can take the example mentioned in the introduction: ℙ(A ∧ B) = ℙ(¬A ∨ ¬B) and ℙ(A|B) = ℙ(B). (This is in fact expressible already in ℒ_cond.) As mentioned above, this enforces that ℙ(B) = 1/√(2), while it follows from Corollary <ref> below that every formula in ℒ_add has models in which every probability is rational.

Multiplicative systems. First, we show that ℒ_comp is no more expressive than ℒ_ind. Define the measures ℙ_1(A ∧ B) = 25/36, ℙ_1(A ∧ ¬B) = ℙ_1(¬A ∧ B) = 5/36, ℙ_1(¬A ∧ ¬B) = 1/36; ℙ_2(A ∧ B) = 27/36, ℙ_2(A ∧ ¬B) = ℙ_2(¬A ∧ B) = 4/36, ℙ_2(¬A ∧ ¬B) = 1/36. Then ℙ_1(A ∧ B) = ℙ_1(A) ℙ_1(B), while ℙ_2(A ∧ B) ≠ ℙ_2(A) ℙ_2(B), so that the measures are distinguishable in ℒ_ind. However, for i ∈ {1,2} we have ℙ_i(A ∨ B) > ℙ_i(A) = ℙ_i(B) > ℙ_i(A ∧ B) > ℙ_i(¬A ∨ ¬B) > ℙ_i(A ∧ ¬B) = ℙ_i(¬A ∧ B) > ℙ_i(¬A ∧ ¬B), so that the measures ℙ_1 and ℙ_2 are not distinguishable in ℒ_comp.

Next, we show that ℒ_ind is less expressive than ℒ_confirm. Define the measures ℙ_1(A ∧ B) = 23/36, ℙ_1(A ∧ ¬B) = ℙ_1(¬A ∧ B) = 6/36, ℙ_1(¬A ∧ ¬B) = 1/36; ℙ_2(A ∧ B) = 27/36, ℙ_2(A ∧ ¬B) = ℙ_2(¬A ∧ B) = 4/36, ℙ_2(¬A ∧ ¬B) = 1/36. The above measures satisfy the same order satisfied by the measures in the preceding example, and A and B are not independent under either measure, so the measures are indistinguishable in ℒ_ind. However, ℙ_1(A | B) < ℙ_1(A), while ℙ_2(A | B) > ℙ_2(A), so that the measures are distinguishable in ℒ_confirm.

Then, note that ℒ_confirm and ℒ_comp are incomparable in expressivity. The following are ℒ_confirm-equivalent: ℙ_1(A ∧ B) = 1/9, ℙ_1(A ∧ ¬B) = 1/3, ℙ_1(¬A ∧ B) = 5/9, ℙ_1(¬A ∧ ¬B) = 0; ℙ_2(A ∧ B) = 1/9, ℙ_2(A ∧ ¬B) = 5/9, ℙ_2(¬A ∧ B) = 1/3, ℙ_2(¬A ∧ ¬B) = 0. These measures are, however, evidently distinguishable in ℒ_comp. A fortiori, they are distinguishable in ℒ_cond. Thus this example also shows that ℒ_confirm is strictly less expressive than ℒ_cond.

After that, we show that ℒ_cond is less expressive than ℒ_quad. Let α, β, γ be mutually unsatisfiable events.
Define ℙ_1(α) = 3/20, ℙ_1(β) = 4/20, ℙ_1(γ) = 13/20, while ℙ_2(α) = 3/20 - .03, ℙ_2(β) = 4/20 - .01, ℙ_2(γ) = 13/20 + .04. One can verify by exhaustion that all comparisons of conditional probabilities agree between ℙ_1 and ℙ_2, thus they are indistinguishable in ℒ_cond. At the same time, there are statements in ℒ_quad in which the models differ. For example, ℙ_1(γ) ℙ_1(β) < ℙ_1(α), whereas ℙ_2(γ) ℙ_2(β) > ℙ_2(α).

Finally, we show that ℒ_quad is less expressive than ℒ_poly. Defining ℙ_1(A) = 2/3 and ℙ_2(A) = 3/4 we find that for i ∈ {1,2} ℙ_i(A) > ℙ_i(A)^2 > ℙ_i(¬A) > ℙ_i(A) · ℙ_i(¬A) > ℙ_i(¬A)^2, while ℙ_1(A)^3 < ℙ_1(¬A) and ℙ_2(A)^3 > ℙ_2(¬A), so that the measures are not distinguishable in ℒ_quad but are distinguishable in ℒ_poly.

Summarizing the results of this section: Figure <ref> describes an expressivity hierarchy; each language is less expressive than any higher-up language to which it is path-connected and incomparable in expressivity to all other languages. In words, the language ℒ_comp is less expressive than ℒ_add, which is again less expressive than ℒ_poly. Similarly, the languages ℒ_comp, ℒ_cond, ℒ_quad, and ℒ_poly form an expressivity hierarchy, with each language in the series less expressive than the one that follows it, as do the languages ℒ_ind, ℒ_confirm, and ℒ_cond. The languages ℒ_ind and ℒ_confirm are incomparable in expressivity with the languages ℒ_comp and ℒ_add.

§.§ Complexity

In this subsection, we introduce the ideas from complexity theory needed to state some of the paper's results. We denote by 𝖲𝖠𝖳_ℒ the satisfiability problem for ℒ. A polynomial-time, deterministic reduction from one decision problem A to another decision problem B is a deterministic Turing machine M, such that a ∈ A if and only if M(a) ∈ B, with M(a) computing in a number of steps polynomial in the length of the binary input a. We write A ≤ B when there exists such a reduction. In particular, when there is a polynomial-time map from ℒ_1 to ℒ_2 which preserves and reflects satisfiability, we write 𝖲𝖠𝖳_ℒ_1 ≤ 𝖲𝖠𝖳_ℒ_2. An 𝖭𝖯-reduction from A to B is a nondeterministic Turing machine M, such that a ∈ A if and only if at least one of the outputs M(a) is in B, with each output M(a) computing in a number of steps polynomial in the length of the binary input a.[Equivalently, one can think of an 𝖭𝖯-reduction as a deterministic reduction M^', provided with a polynomial-sized “certificate” or “guess,” which specifies which (if any) of the non-deterministic paths of M will lead to an output M(a) in B: then, M^' “verifies” this path, correctly producing an output M^'(a) ∈ B if and only if a ∈ A.] When each member of a collection 𝒞 of decision problems can be reduced via some deterministic, polynomial-time map to a particular decision problem A, one says that the problem A is 𝒞-hard; if in addition, A ∈ 𝒞, then A is 𝒞-complete. The class 𝒞 of decision problems is called a complexity class. A complexity class 𝒞 is closed under 𝖭𝖯-reductions if whenever there is an 𝖭𝖯-reduction from A to B ∈ 𝒞, then in addition A ∈ 𝒞. We are concerned here with two complexity classes which are closed under 𝖭𝖯-reductions <cit.>: 𝖭𝖯 and ∃ℝ. We discuss each in turn.

The Class 𝖭𝖯. The class 𝖭𝖯 contains any problem that can be solved by a non-deterministic Turing machine in a number of steps that grows polynomially in the input size. Hundreds of problems are known to be 𝖭𝖯-complete, among them Boolean satisfiability and the decision problems associated with several natural graph properties, for example possession of a clique of a given size or possession of a Hamiltonian path.
See <cit.> for a survey of such problems and their relations.

The Class ∃ℝ. The Existential Theory of the Reals (ETR) contains all true sentences of the form `there exist x_1, ..., x_n ∈ ℝ satisfying 𝒮', where 𝒮 is a system of equalities and inequalities of arbitrary polynomials in the variables x_1, ..., x_n. For example, one can state in ETR the existence of the golden ratio, which is the only root of the polynomial f(x) = x^2 - x - 1 greater than one, by `there exists x > 1 satisfying f(x) = 0.' The decision problem of saying whether a given formula φ is in ETR is complete (by definition) for the complexity class ∃ℝ. The class ∃ℝ is the real analogue of 𝖭𝖯, in two senses. Firstly, the satisfiability problem that is complete for ∃ℝ features real-valued variables, while the satisfiability problems that are complete for 𝖭𝖯 typically feature integer- or Boolean-valued variables. Secondly, and more strikingly, <cit.> showed that while 𝖭𝖯 is the class of decision problems with answers that can be verified in polynomial time by machines with access to unlimited integer-valued memory, ∃ℝ is the class of decision problems with answers that can be verified in polynomial time by machines with access to unlimited real-valued memory. As with 𝖭𝖯, a myriad of problems are known to be ∃ℝ-complete. We include some examples that illustrate the diversity of such problems:
* In graph theory, there is the ∃ℝ-complete problem of deciding whether a given graph can be realized by a straight line drawing <cit.>.
* In game theory, there is the ∃ℝ-complete problem of deciding whether an (at least) three-player game has a Nash equilibrium with no probability exceeding a fixed threshold <cit.>.
* In geometry, there is the ∃ℝ-complete `art gallery' problem of finding the smallest number of points from which all points of a given polygon are visible <cit.>.
* In machine learning, there is the ∃ℝ-complete problem of finding weights for a neural network and some training data such that the total error is below a given threshold <cit.>.
For discussions of further ∃ℝ-complete problems, see <cit.> and <cit.>. The inclusions 𝖭𝖯 ⊆ ∃ℝ ⊆ 𝖯𝖲𝖯𝖠𝖢𝖤 are known, where 𝖯𝖲𝖯𝖠𝖢𝖤 is the set of decision problems solvable using polynomial space; it is an open problem whether either inclusion is strict.

§.§ Notation

We denote probability measures by ℙ and formal logical symbols for such measures by 𝐏. We use A, B, C to denote propositional atoms; Greek minuscule α, β, γ, δ, ϵ, ζ to denote propositional formulas over such atoms; sans-serif 𝖺, 𝖻, 𝖼, etc. to denote terms (elements of the various T_*) in probabilities of such formulas; and φ, ψ, χ to denote formulas comparing such terms (viz. formulas of the ℒ_*). At various points in the paper we rely on the following definition: For a set 𝒜 ⊂ 𝖯𝗋𝗈𝗉 of proposition letters, let Δ_𝒜 = {⋀_A ∈ 𝒜 ℓ_A : ℓ_A ∈ {A, ¬A} for each A} be the set of formulas providing complete state descriptions of 𝒜. We simply write Δ where the set 𝒜 is clear from context. Often, we will take 𝒜 to be the set of proposition letters appearing in a formula φ, in which case we write Δ_φ instead of Δ_𝒜. For example, if φ is the formula 𝐏(A ∨ B) > 𝐏(A), then Δ_φ = {A ∧ B, A ∧ ¬B, ¬A ∧ B, ¬A ∧ ¬B}.

§ ADDITIVE SYSTEMS

§.§ Positive Linear Inequalities

Our first task is to provide an axiomatization of the language ℒ_add. Aside from the basic principles of 𝖠𝖷_base (Fig. <ref>), we have the following additivity axiom:
𝖠𝖽𝖽. 𝐏(α) ≈ 𝐏(α ∧ β) + 𝐏(α ∧ ¬β)
We also have core axioms for dealing with addition.
The system 𝖠𝖷_add is shown in Figure <ref>.[A version of the axiom 2𝖢𝖺𝗇𝖼 appears in the textbook by <cit.>, under the name double cancellation (p. 250).] A number of further principles are easily derivable in 𝖠𝖷_add, which we record in the following lemma, with suggestive names: The following all follow in 𝖠𝖷_add: 𝖭𝗈𝗇𝖭𝗎𝗅𝗅. 𝖺≿0 𝖱𝖾𝖿𝗅. 𝖺≿𝖺 𝖬𝗈𝗇𝗈. 𝖺+𝖻≿𝖺 1𝖢𝖺𝗇𝖼. 𝖺+𝖼≿𝖻+𝖼↔𝖺≿𝖻 𝖣𝗎𝗉𝗅. 𝖺+𝖺≿𝖻+𝖻↔𝖺≿𝖻 𝖢𝗈𝗆𝖻. (𝖺≿𝖻∧𝖼≿𝖽) →𝖺+𝖼≿𝖻+𝖽 𝖲𝗎𝖻1. 𝖾+𝖼≈𝖺→ (𝖺+𝖽≿𝖻+𝖼↔𝖾+𝖽≿𝖻) 𝖲𝗎𝖻2. 𝖾+𝖼≈𝖺→ (𝖻+𝖼≿𝖺+𝖽↔𝖻≿𝖾+ 𝖽) 𝖤𝗅𝗂𝗆. (𝖾+𝖺≻𝖻∧𝖼≻𝖾+𝖽) →𝖺+𝖼≻𝖻+𝖽 𝖱𝖾𝗉𝗅. 𝖺≈𝖻→ (φ↔φ^𝖺_𝖻) where φ^𝖺_𝖻 is the result of replacing some instances of 𝖺 by 𝖻. The main result of this subsection is a completeness proof for 𝖠𝖷_add. Unlike existing completeness arguments for additive probability logics (such as that in ), the proof here proceeds solely on the basis of a variable elimination argument. 𝖠𝖷_add is sound and complete. Soundness is routine. For completeness, we show that the system is strong enough to carry out a variation on the Fourier-Motzkin elimination method for solving linear inequalities. By 𝖣𝗂𝗌𝗍 and 𝖠𝖽𝖽, we can assume that in every probability term 𝐏(δ), the formula δ is a (canonical) complete state description over finitely many propositional atoms, or else a contradiction. Thus we can assume that for every two probability terms 𝐏(δ) and 𝐏(γ) appearing in the formula, δ and γ are logically inconsistent. Their values are therefore constrained only by the restrictions explicitly implied by the formula, and by the fact that their sum must be greater than 0. By 𝖭𝗈𝗇𝖣𝖾𝗀 we can assume that our formula is conjoined with the (derivable) statement ∑_δ𝐏(δ) ≻0, since the two will be interderivable. In other words, our formula will be satisfiable iff the corresponding linear system—replacing each 𝐏(δ) with a distinct variable x and 𝐏() with 0—has a solution. Note that it suffices to consider (un)satisfiability over solutions in ℚ^+, the non-negative rationals, as we can always normalize to obtain a solution in [0,1] corresponding to a probability measure. Our strategy will be as follows. Suppose φ is valid. We want to show that we can transform φ into an equisatisfiable formula ψ, such that ⊢_𝖠𝖷_addφ→ψ. Because the sentence ψ will have a particularly simple form, we will be able to tell easily that its negation is derivable. It will follow at once (by Boolean reasoning) that ⊢_𝖠𝖷_addφ. Assume that φ is in disjunctive normal form, and consider any disjunct, which we can assume is a conjunction of equality statements (≈) and strict inequality statements (≻). Pick any `variable' x = 𝐏(δ). We want to show how x can be eliminated from each conjunct in a way that leads to an equisatisfiable formula that is also derivable from the previous formula. By principles 1𝖢𝖺𝗇𝖼 and 𝖣𝗎𝗉𝗅 (together with 𝖫𝗂𝗇, 𝖠𝗌𝗌𝗈𝖼, and 𝖢𝗈𝗆𝗆) we can assume without loss a fixed k>0, such that each conjunct containing x has one of the following forms: 3 * kx + 𝖺≈𝖻 * kx + 𝖺≻𝖻 * 𝖻≻ kx+ 𝖺 where kx is an abbreviation for the k-fold sum of x, and where x does not appear anywhere in terms 𝖺 or 𝖻. If there is no 𝖺, simply let it be 0, admissible by 𝖹𝖾𝗋𝗈. To show that we can eliminate x altogether, consider the following cases: * There is at least one conjunct of type <ref>. In this case, principles 𝖲𝗎𝖻1 and 𝖲𝗎𝖻2 allow eliminating x from all but one conjunct of type <ref>, as well as all conjuncts of types <ref> and <ref>. It remains only to show that x can be eliminated from the last equality kx+𝖺≈𝖻. 
Since x appears nowhere else in the conjunct, the whole formula will be equisatisfiable with the result of replacing kx+𝖺≈𝖻 with 𝖻≿𝖺. Moreover, the latter is derivable from the former by transitivity of ≿ and using 𝖢𝗈𝗆𝗆 and 𝖬𝗈𝗇𝗈. * There are conjuncts of both types <ref> and <ref>. This case is handled by principle 𝖤𝗅𝗂𝗆. For each pair kx + 𝖺≻𝖻 and 𝖼≻ kx+ 𝖽, we include a new conjunct 𝖺+𝖼≻𝖻+𝖽, which does not involve x. The resulting formula will be equisatisfiable. * There are only conjuncts of type <ref>. Such a formula is always satisfiable (over the positive rationals), so we can simply replace each of them with any tautology. * There are only conjuncts of type <ref>. In this case the equisatisfiable transformation replaces each instance 𝖻≻ kx+ 𝖺 with 𝖻≻𝖺. The latter can be derived from the former by transitivity, 𝖢𝗈𝗆𝗆, and 𝖬𝗈𝗇𝗈. After the last variable has been eliminated by repeated application of the above rules, we will be left with a conjunction of (in)equalities in which every term is a sum of 0s. By 𝖹𝖾𝗋𝗈, we can assume every conjunct is of the form 0≿0 or 0≿0. Unsatisfiability implies that 0≿0 must be a conjunct. But 0≿0 is provable by 𝖣𝗂𝗌𝗍. Thus, any unsatisfiable formula is refutable in 𝖠𝖷_add. §.§ Comparative Probability The pure comparative language ℒ_comp is just like ℒ_add, but without explicit addition over probability terms. Thus, while we can take 𝖠𝖷_base as a basis for axiomatization, none of the remaining axioms of 𝖠𝖷_add are in the language, most saliently, the additivity axiom 𝖠𝖽𝖽. §.§.§ Quasi-Additivity Early in the development of modern probability, <cit.> proposed an intuitive principle, subsequently called quasi-additivity (see, e.g., ):[It will be convenient to omit the explicit reference in ℒ_comp to probability operators, simply writing α≿β in place of 𝐏(α)≿𝐏(β). For ℒ_add we do not omit it.] 𝖰𝗎𝖺𝗌𝗂. α≿β↔ (α∧β) ≿ (β∧α) Some authors also refer to 𝖰𝗎𝖺𝗌𝗂 as qualitative additivity, and it has been argued that this principle constitutes the `hard core for the logic of uncertain reasoning' <cit.>. It was famously shown in <cit.> that 𝖰𝗎𝖺𝗌𝗂 is insufficient to guarantee a probabilistic representation, falling short of full additivity. However, there is an important sense in which this axiom does capture additivity. First, observe that 𝖰𝗎𝖺𝗌𝗂 is equivalent to the following variant, where δ is a Boolean expression incompatible with both γ and θ: 𝖰𝗎𝖺𝗌𝗂'. (γ∨δ) ≿ (θ∨δ) ↔γ≿θ To see this, note that 𝖰𝗎𝖺𝗌𝗂 emerges as the special case of 𝖰𝗎𝖺𝗌𝗂' when γ = (α∧β), θ=(β∧α), and δ = (α∧β). In the other direction, letting α = (γ∨δ) and β = (θ∨δ), we obtain (γ∨δ)≿ (θ∨δ) ↔ ((γ∨δ) ∧ (θ∨δ)) ≿ ((θ∨δ) ∧( γ∨δ)) ↔ (γ∧θ) ≿ (θ∧γ) ↔ γ≿θ, where the first and third equivalences are instances of 𝖰𝗎𝖺𝗌𝗂 and the second follows from 𝖣𝗂𝗌𝗍. The following observations identify a strong respect in which 𝖰𝗎𝖺𝗌𝗂 truly captures additivity. As long as we are dealing with incompatible Boolean expressions, 𝖰𝗎𝖺𝗌𝗂 facilitates all of the same reasoning patterns as the core principles of 𝖠𝖷_add: Suppose α, β, γ, δ, ε, and ζ are all pairwise unsatisfiable. Then following patterns are derivable from 𝖠𝖷_base+𝖰𝗎𝖺𝗌𝗂: 2𝖢𝖺𝗇𝖼𝖰. ((α∨ε)≿ (γ∨ζ) ∧ (β∨ζ)≿ (δ∨ε)) → (α∨β) ≿ (γ∨δ) 𝖢𝗈𝗇𝗍𝗋𝖰. ((α∨β)≿(γ∨δ) ∧δ≿β) →α≿γ Consider first 2𝖢𝖺𝗇𝖼𝖰. If (α∨ε)≿ (γ∨ζ), then by 𝖰𝗎𝖺𝗌𝗂' we have (α∨ε∨β) ≿ (γ∨ζ∨β). Meanwhile, from (β∨ζ)≿ (δ∨ε), again using 𝖰𝗎𝖺𝗌𝗂' we have (β∨ζ∨γ) ≿ (δ∨ε∨γ). By 𝖣𝗂𝗌𝗍 and transitivity of ≿ we obtain (α∨β∨ε) ≿ (γ∨δ∨ε), and finally by one last application of 𝖰𝗎𝖺𝗌𝗂' we derive (α∨β) ≿ (γ∨δ). 
For 𝖢𝗈𝗇𝗍𝗋𝖰: if δ≿β then by 𝖰𝗎𝖺𝗌𝗂', (γ∨δ) ≿ (γ∨β). From (α∨β) ≿ (γ∨δ) and transitivity we derive (α∨β) ≿ (γ∨β). From one more application of 𝖰𝗎𝖺𝗌𝗂' we conclude α≿γ. Indeed, the reader is invited to check that all of the patterns recorded in Lemma <ref> are derivable under the same restriction. For instance, 1𝖢𝖺𝗇𝖼 simply becomes 𝖰𝗎𝖺𝗌𝗂'. For another example, here is a version of 𝖣𝗎𝗉𝗅: Suppose α, β, γ, and δ are all pairwise unsatisfiable. Then every instance of the following is derivable from 𝖠𝖷_base+𝖰𝗎𝖺𝗌𝗂: 𝖣𝗎𝗉𝗅𝖰. (α≈β∧γ≈δ) →((α∨β)≿ (γ∨δ) ↔α≿γ). Suppose α≈β and γ≈δ. Then if α≿γ, by several applications of transitivity we know that β≿δ. Applying 2𝖢𝖺𝗇𝖼𝖰 from Lemma <ref> (letting ε = ζ = ⊤) we obtain (α∨β)≿ (γ∨δ). In the other direction, suppose (α∨β) ≿ (γ∨δ) but that α≿γ fails. Then from our assumption above, using transitivity and Boolean reasoning, β≿δ must also fail. By comparability of ≿ it follows that δ≿β. But then 𝖢𝗈𝗇𝗍𝗋𝖰 from Lemma <ref> implies α≿γ, a contradiction. Limitations arise from the case where we cannot simulate addition with disjunction, when Boolean expressions are not incompatible. There are two approaches to circumvent the problem, each leading to a different axiomatization of the logic of comparative probability. The first and most canonical way, emerging from classical work in the theory of comparative probability orders, relies on introducing an infinitary axiom scheme which captures precisely the probabilistic representability of a comparative probability order on a Boolean algebra of events. The second one relies on introducing the Polarization rule, a powerful proof rule proposed by <cit.>. As we show in section <ref>, adding the quasi-additivity axiom and the polarisation rule to 𝖠𝖷_base gives an alternative axiomatization for ℒ_comp. §.§.§ Finite Cancellation and probabilistic representability The most common approach to axiomatizing comparative probability crucially relies on a representation theorem for comparative probability orders, due to <cit.> and <cit.>. Faced with the inadequacy of de Finetti's quasi-additivity principle to guarantee probabilistic representation, these authors proposed an infinite list of axioms, often called the finite cancellation axioms. Here we follow subsequent modal-logical formulations of the axioms due to <cit.>. Given two lists of n Boolean formulas α_1,…,α_n,β_1,…,β_n, we can consider the set Δ of all state descriptions, treating these Boolean formulas as atoms. We call a state description δ∈Δ balanced if the same numbers of α_i's as β_i's is (not) negated in δ. Let ℬ⊂Δ be the set of all balanced state descriptions. Now define the following abbreviation: (α_1,…,α_n)≡_0(β_1,…,β_n) def= (⋁_δ∈ℬδ)≈⊤. For all n, and all pairs of sequences of n formulas, we have an instance of 𝖥𝗂𝗇𝖢𝖺𝗇_n: 𝖥𝗂𝗇𝖢𝖺𝗇_n. ( (α_1,…,α_n)≡_0(β_1,…,β_n) ∧α_1≿β_1 ∧…∧α_n-1≿β_n-1) →β_n ≿α_n. Semantically, it is easy to see that the balancedness condition (α_1,…,α_n)≡_0(β_1,…,β_n) amounts to the property that ∑_i≤ n1_α_i = ∑_i≤ n1_β_i, where 1_α is the indicator function of the set α. Accordingly, given a sample space Ω and a set algebra ℱ, we say that two sequences of events A_1,..., A_n and B_1,..., B_n ∈ℱ are balanced if ∑_i≤ n1_A_i = ∑_i≤ n1_B_i. A semantic formulation of the finite cancellation axioms is then the following: 𝖥𝗂𝗇𝖢𝖺𝗇_n. If (A_i)_i≤ n and (B_i)_i≤ n are balanced and ∀ i< n, A_i≽ B_i, then B_n≽ A_n. The two sequences being balanced means that every element of the sample space belongs to exactly as many A_i's as B_i's. 
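To make the balancedness condition concrete, here is a small sketch (our own illustration, not part of the paper) that checks it pointwise over a finite sample space and exhibits the equal-sums fact used in the soundness argument that follows.

```python
# Sketch: balancedness of two event sequences over a finite sample space,
# with events represented as Python sets of outcomes.

def balanced(omega, As, Bs):
    """True iff every outcome lies in exactly as many A_i's as B_i's."""
    return all(sum(w in A for A in As) == sum(w in B for B in Bs) for w in omega)

omega = {0, 1, 2, 3}
As = [{0, 1}, {2, 3}]            # A_1, A_2
Bs = [{0, 2}, {1, 3}]            # B_1, B_2
print(balanced(omega, As, Bs))   # True: each outcome lies in exactly one A_i and one B_i

# For balanced sequences, sum_i P(A_i) = sum_i P(B_i) under *any* measure P,
# since both sums add up exactly the same point masses.
P = {0: 1/8, 1: 3/8, 2: 1/8, 3: 3/8}
measure = lambda E: sum(P[w] for w in E)
print(sum(measure(A) for A in As), sum(measure(B) for B in Bs))   # 1.0 1.0
```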
Under this description, the soundness of 𝖥𝗂𝗇𝖢𝖺𝗇_n is straightforward: balancedness ensures that the sums ∑_i(A_i) and ∑_i(B_i) are equal, since they are computed by taking the exact same sums of terms (ω) for ω∈Ω: this is inconsistent with having (A_i)≥(B_i) for all i with some of these inequalities strict. (Alternatively, to show soundness we can prove that 𝖥𝗂𝗇𝖢𝖺𝗇_n is derivable in 𝖠𝖷_add: see Appendix.) The significance of the finite cancellation scheme stems from the following theorem. Say that a relation ≽ on a Boolean algebra ℱ is probabilistically representable if there exists a probability measure ℙ on ℱ such that, for all A,B∈ℱ, A≽ B if and only if ℙ(A)≥ℙ(B). We have: Let (Ω, ℱ) a finite set algebra and ≽ a binary relation on ℱ. There is a probability measure ℙ on (Ω, ℱ) representing ≽ if and only if the following hold for all A, B ∈ℱ: 𝖳𝗈𝗍. ≽ is a reflexive total order; 𝖭𝗈𝗇𝖣𝖾𝗀. ∅⋡Ω; 𝖭𝗈𝗇𝖳𝗋𝗂𝗏. A≽∅; 𝖥𝗂𝗇𝖢𝖺𝗇_n. If (A_i)_i⩽ n and (B_i)_i⩽ n are balanced and ∀ i< n, A_i≽ B_i, then B_n≽ A_n. The result is proved by appeal to general results in linear algebra or linear programming (see, e.g., or ). It is worth sketching the proof here in order to highlight the algebraic content of the finite cancellation rule, as well as to emphasize the key differences between the additive and multiplicative systems we investigate below: particularly, the distinct tools and proof techniques involved. To prove the right-to-left direction of Theorem <ref>, we first formulate the task as an algebraic problem. Consider an order ≽ on events in the algebra (Ω,ℱ) satisfying the properties listed above. Take the vector space ℝ^n. Each event A is identified with the vector A of its indicator function: that is, the vector (v_1,…,v_n) where v_i= A(ω_i). Finding a measure representing ≽ amounts to finding a linear functional Φ:ℝ^n→ℝ with the property that Φ (A)≥Φ(B) iff A≽ B. The map A↦Φ(A) can then be seen as a (non-normalised) additive measure on (Ω,ℱ). Note that the linearity of Φ ensures that Φ(∅)=Φ(0)=0. Further, 𝖭𝗈𝗇𝖣𝖾𝗀 and 𝖭𝗈𝗇𝖳𝗋𝗂𝗏 ensure that Φ(Ω) > 0 and Φ(A)≥ 0 for all A∈ℱ. Importantly, additivity is also guaranteed: when A∩ B≠∅, we have A∪ B=A+B in ℝ^n, and by the linearity of Φ we have Φ(A∪ B)= Φ(A+B) = Φ(A)+Φ(B). This means that we can define the desired probability measure in the obvious way: set (A):=Φ(A)/Φ(Ω). To find such an order-preserving linear functional, we can appeal to the following Lemma, due to <cit.>: Let V a finite-dimensional real vector space, and (M,≽) a finite relational structure with M⊆ V and M a set of vectors with coordinates in ℚ. Then there exists a linear functional Φ:V→ℝ satisfying w≽v⇔Φ(v)≽Φ(w) if and only if (a) ∀w,v∈ M, v≽w or w≽v (b) if ∑^n_i=1v_i = ∑^n_i=1w_i and ∀ i<n, v_i≽w_i then v_n≼w_n. We obtain the desired functional by applying the Lemma to the structure (M, ≽), where M={A | A∈ℱ}, and we lift the ≽ relation to M by setting A≽B if and only if A≽ B. In particular, note that property (b) of the Lemma, when applied to vectors of the form v_i= A_i and w_i = B_i, corresponds precisely to the finite cancellation axiom scheme 𝖥𝗂𝗇𝖢𝖺𝗇_n. In this way, Theorem <ref> is established. In order to get a better grasp on the algebraic content the finite cancellation axioms, it is informative to consider the following simple geometric description of the problem. We want a linear functional Φ to have the property that Φ(A-B)≥ 0 if A≽ B, and Φ(A-B) < 0 if A⋡B. Each linear functional on ℝ^n can be written in the form Φ(𝐯)=𝐰^T𝐯 for some vector 𝐰. 
This means that, geometrically, finding a linear functional of the desired kind amounts to finding a hyperplane separating (the cones generated by) the sets {A-B | A≽ B} and {A-B | A⋡B}. Given such a hyperplane with normal 𝐰, we have that 𝐰^T(A-B)≥ 0 for all A, B such that A≽ B (the angle between 𝐰 and (A-B) is right or acute) and 𝐰^T(A-B)< 0 for all A, B such that B≻ A (the angle between (A-B) and w is obtuse). We thus need to solve the following system of linear inequalities for w∈ℝ^n. w^T (A-B) ≥ 0 for all A,B∈ℱ such that A≽ B w^T (A-B) < 0 for all A,B∈ℱ such that B≻ A A well-known theorem of the alternative (; see also ) states that a system like the above fails to have a solution only if there exists an (integer-valued) certificate of infeasibility. A certificate of infeasibility for the linear system given here translates into the existence of two balanced sequences of events which violate an instance of 𝖥𝗂𝗇𝖢𝖺𝗇_n. The finite cancellation conditions ensure the nonexistence of certificates of infeasibility for the system of linear inequalities expressed by the order ≽. With Theorem <ref> at hand, one can show by standard logical methods that adding the infinite axiom scheme 𝖥𝗂𝗇𝖢𝖺𝗇_n to 𝖠𝖷_base yields a complete axiomatization of ℒ_comp-validities (together with a simple extensionality axiom). ℒ_comp is completely axiomatized by 𝖠𝖷_base together with the following: 𝖥𝗂𝗇𝖢𝖺𝗇_n. ( (α_1,…,α_n)≡_0(β_1,…,β_n) ∧α_1≿β_1 ∧…∧α_n-1≿β_n-1) →β_n ≿α_n. 𝖤𝗑𝗍. ((α_1↔α_2 ≿⊤) ∧ (β_1↔β_2 ≿⊤)) →((α_1≿β_1) → (α_2≿β_2)) Three points are worth noting. First, the techniques required for proving the completeness result belong entirely to the standard toolkit of linear algebra. We see how this canonical axiomatization of ℒ_comp is based on hyperplane separation methods. ℒ_comp can only express linear constraints on the representing probability measure; the axiom schemes ensure precisely the consistency of the systems of linear inequalities that the language can consistently express. As we will see, this will no longer hold in any of the multiplicative systems we will consider, which can express polynomial constraints: there, proving completeness will require showing that the system is powerful enough to prove the consistency of certain systems of polynomial inequalities. Thus the study of multiplicative probability logics involves techniques from semialgebraic geometry. Second, in this case the linear functional in the representation theorem can in fact always be taken to be rational-valued: in other words, no constraints expressible in ℒ_comp can force the probability of an event to have an irrational value. This entails that any consistent formula in ℒ_comp has a model where the probabilities are all rational. As a consequence, one can show that ℒ_comp is sound an complete with respect to finite counting models, in which α≽β holds exactly if more states in the model satisfy α than β <cit.>. This is not in general the case for multiplicative systems: as we saw above, the ability to express polynomial constraints can force irrational probabilities for some events. Lastly, we saw this canonical axiomatization is infinite. In the next section, we will show that one can avoid the infinite cancellation scheme by enriching our base system with a powerful proof rule. In Section <ref>, we will show that, without such a strong proof rule, an infinite axiom scheme is unavoidable: within the basic rules of system 𝖠𝖷_base, the logic ℒ_comp is not finitely axiomatizable. 
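As a practical companion to the linear-programming formulation of representability just described (our own sketch, assuming numpy and scipy; not part of the paper), one can search for a representing measure by solving precisely this system of linear inequalities, maximizing the margin on the strict comparisons:

```python
# Sketch: searching for a probability measure representing a comparative order
# on events, via the hyperplane-separation / linear-programming formulation.
import numpy as np
from scipy.optimize import linprog

def representing_measure(n_states, weak, strict):
    """
    Look for a probability vector w on Omega = {0, ..., n_states - 1} with
      w . (1_A - 1_B) >= 0  for (A, B) in weak    (A at least as likely as B)
      w . (1_A - 1_B) >  0  for (A, B) in strict  (A strictly more likely than B),
    by maximizing the smallest strict margin t.  Returns w if t > 0, else None.
    """
    def ind(E):
        return np.array([1.0 if i in E else 0.0 for i in range(n_states)])

    A_ub, b_ub = [], []
    for A, B in weak:    # w . (1_B - 1_A) <= 0
        A_ub.append(np.append(ind(B) - ind(A), 0.0)); b_ub.append(0.0)
    for A, B in strict:  # w . (1_B - 1_A) + t <= 0
        A_ub.append(np.append(ind(B) - ind(A), 1.0)); b_ub.append(0.0)
    c = np.append(np.zeros(n_states), -1.0)                 # maximize t
    A_eq = np.append(np.ones(n_states), 0.0).reshape(1, -1) # sum_i w_i = 1
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * n_states + [(None, 1.0)],
                  method="highs")
    if res.success and res.x[-1] > 1e-9:
        return res.x[:-1]                                   # the representing measure
    return None

# Tiny example on Omega = {0, 1, 2}: require {0, 1} strictly more likely than {2},
# and {2} at least as likely as {1}.
print(representing_measure(3, weak=[({2}, {1})], strict=[({0, 1}, {2})]))
```

When the order violates some cancellation condition, the program is infeasible (or the best margin is zero) and the function returns None, mirroring the certificates of infeasibility discussed above.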
§.§.§ Polarization Intuitively, if we could only `duplicate' formulas whenever we want to add probabilities for overlapping events, this would license the same reasoning capacities as with linear inequalities. Such a proof rule was introduced by <cit.>, following <cit.>. Suppose A is a proposition letter that occurs nowhere in α or φ. Then the polarization rule says: 𝖯𝗈𝗅𝖺𝗋𝗂𝗓𝖾. (α∧ A)≈(α∧ A) →φφ. The soundness of 𝖯𝗈𝗅𝖺𝗋𝗂𝗓𝖾 is straightforward to show (see ): if φ is satisfiable, it suffices to show that φ can be satisfied together with (α∧ A) ≈ (α∧ A). This is achieved by duplicating α, the extension of α, and ensuring that all A-free formulas are thereby preserved. Completeness, however, is less straightforward. Existing treatments show how the infinitary schema 𝖥𝗂𝗇𝖢𝖺𝗇_n can be derived from 𝖯𝗈𝗅𝖺𝗋𝗂𝗓𝖾 (cf. ); as we saw in <ref>, completeness of the infinitary system in turn depends on additional facts from linear algebra. Here we can give a more direct argument, showing exactly how polarization, together with de Finetti's quasi-additivity axiom, recapitulates the additive reasoning for variable elimination that we gave for 𝖠𝖷_add (Theorem <ref>). Let 𝖠𝖷_comp consist of the axioms and rules of 𝖠𝖷_base, plus 𝖰𝗎𝖺𝗌𝗂 and 𝖯𝗈𝗅𝖺𝗋𝗂𝗓𝖾 (Fig. <ref>). Consider some finite set 𝒜 of propositional atoms and suppose A ∉𝒜. For a formula φ over 𝒜, define the relativization φ^A to be the result of replacing every inequality ε≿ζ in φ with (ε∧ A) ≿ (ζ∧ A). Let π be the formula: π def= ⋀_δ∈Δ(δ∧ A) ≈ (δ∧ A), where Δ is the set of state descriptions over 𝒜. Then we have: ⊢_𝖠𝖷_compπ→ (φ↔φ^A). Consider any inequality ε≿ζ appearing in φ. The result will follow by Boolean reasoning if we can just show that ε≿ζ↔ (ε∧ A) ≿ (ζ∧ A) follows from π. By 𝖣𝗂𝗌𝗍 we can assume that ε and ζ are disjunctions of state descriptions over 𝒜. First observe that (ε∧ A) ≈ (ε∧ A) follows from π, and the same for (ζ∧ A) ≈ (ζ∧ A), using 𝖣𝗂𝗌𝗍 and multiple instances of 𝖣𝗎𝗉𝗅𝖰 (from Lemma <ref>). But then by another application of 𝖣𝗎𝗉𝗅𝖰, where α = (ε∧ A), β = (ε∧ A), γ = (ζ∧ A), and δ = (ζ∧ A), and 𝖣𝗂𝗌𝗍 once again, we derive ε≿ζ↔ (ε∧ A) ≿ (ζ∧ A). 𝖠𝖷_comp is sound and complete. The proof strategy is to follow the derivation of φ from 𝖠𝖷_add, showing how to transform this into a derivation from 𝖠𝖷_comp using polarization. Roughly speaking, the idea is to replace sums of probability terms with probabilities of disjunctions; polarization is used to ensure that the disjuncts can be mutually incompatible, by furnishing a sufficient number of `copies' of each disjunct. As we saw above, quasi-additivity is sufficiently strong when reasoning about incompatible disjuncts. As in the proof of Theorem <ref>, by 𝖣𝗂𝗌𝗍 we assume that for every inequality α≿β appearing in φ, both α and β are disjunctions involving the finitely many state-descriptions δ∈Δ over the propositional atoms 𝖯𝗋𝗈𝗉^φ that occur in φ. Because φ is also in ℒ_add, the proof of Theorem <ref> furnishes a derivation of φ. Let m be the maximum factor that appears anywhere in the derivation, that is, the largest number of times any term 𝐏(δ) is added to itself. We introduce n=⌈_2 m ⌉ fresh proposition letters 𝖯𝗋𝗈𝗉^+ = {A_1,…,A_n}. As the method below will mimic the previous derivation—and in particular will not introduce more factors—n atoms, and thus 2^n ≥ m state descriptions over those atoms, suffices. Define Δ_k to be the state descriptions over 𝖯𝗋𝗈𝗉^φ∪{A_1,…,A_k}, so in particular, Δ_0 = Δ, and Δ_n is the set of state descriptions over 𝖯𝗋𝗈𝗉^φ∪𝖯𝗋𝗈𝗉^+. 
We can now relativize φ n times, producing φ^* = (… (φ^A_1)…)^A_n. And where π^* def= ⋀_k< n⋀_δ∈Δ_k (δ∧ A_k+1) ≈ (δ∧ A_k+1), multiple applications of Lemma <ref> allow us to conclude: ⊢_𝖠𝖷_compπ^* → (φ↔φ^*). Thus, under the assumption π^*, it suffices just to derive φ^*. Observe furthermore that we now essentially have m copies of each state description δ over 𝖯𝗋𝗈𝗉^φ, each copy `tagged' by a distinct state description σ over 𝖯𝗋𝗈𝗉^+. The conjunction δ∧σ is in fact an element of Δ_n, i.e., a state description over 𝖯𝗋𝗈𝗉^φ∪𝖯𝗋𝗈𝗉^+. It is straightforward to show that 𝖠𝖷_comp proves π^* → (δ∧σ_i) ≈ (δ∧σ_j) for each δ∈Δ and all σ_i≠σ_j; that is, every pair of elements of Δ_n that agree on 𝖯𝗋𝗈𝗉^φ are provably equiprobable. Because σ_i≠σ_j we also know that all pairs δ∧σ_i, δ∧σ_j are jointly unsatisfiable. Thus, suppose that φ^* is valid, that φ^* is in disjunctive normal form, and consider any disjunct, a conjunction of equality statements (≈) and strict inequalities (≻) between disjunctions of relativized state descriptions; that is, each (in)equality is between disjunctions of conjunctions δ∧⋀_i≤ nA_i, where δ∈Δ. It remains to show that each step of the variable elimination (or `δ-elimination') argument from Theorem <ref> can be emulated here. Our aim is to eliminate the `variable' x = (δ∧σ_1), where σ_1 = ⋀_i ≤ nA_i. Let kx stand for any k-fold disjunction (δ∧σ_i_1) ∨…∨ (δ∧σ_i_k), analogous to the k-fold sum 𝐏(δ)+…+𝐏(δ) in the proof of Theorem <ref>. At the start we will always have k=1, but as we proceed through the elimination of variables some will appear with greater multiplicity (but again, no greater than m). Thus, in general, every conjunct containing x will have one of the following three forms: 3 * kx ∨α≈β * kx ∨α≻β * β≻ kx ∨α where kx, α, and β are all mutually incompatible Boolean formulas. Here we are using 𝖣𝗂𝗌𝗍, 𝖣𝗎𝗉𝗅𝖰, and 𝖰𝗎𝖺𝗌𝗂', which also allow us to assume k is the same across conjuncts containing x. Note also that either of α or β can be the empty disjunction . We now carry out the same case distinctions as in the proof of Theorem <ref>: * There are only conjuncts of type <ref>. In this case we can simply replace each instance β≻ kx ∨α with β≻α, which results in an equisatisfiable formula. The latter can be derived from the former by 𝖣𝗂𝗌𝗍 and transitivity of ≿. * There are only conjuncts of type <ref>. The entire disjunct will then make no difference to satisfiability, so we can replace it with any tautology. * There are conjuncts of both types <ref> and <ref>. This case is handled by the quasi-additive version[That is, ((ε∨α) ≻β∧γ≻ (ε∨δ)) → (α∨γ) ≻ (β∨δ).] of 𝖤𝗅𝗂𝗆. For each pair kx ∨α≻β and γ≻ kx∨δ, we want to replace it with a new conjunct α∨γ≻β∨δ, which does not involve x. But this is only guaranteed to produce an equisatisfiable result if α and γ do not share disjuncts (and the same for β and δ). If α and γ share a disjunct, then we let γ' be just like γ but with a distinct σ_i for that disjunct. Because all such copies of the disjunct are provably equiprobable, we have γ≈γ', and thus γ' ≻ kx ∨δ. Performing any necessary analogous replacement to obtain δ' in addition, the result, (α∨γ') ≿ (β∨δ'), in place of kx ∨α≻β and γ≻ kx∨δ—again, derivable from the latter by the variant of 𝖤𝗅𝗂𝗆—will give an equisatifiable transformation of the original formula, now without any appearance of x. * There is at least one conjunct of type <ref>. In this case, the quasi-additive versions[To wit: (ε∨γ)≈α→ ((α∨δ)≿(β∨γ) ↔ (ε∨δ)≿β) and (ε∨γ)≈α→ ((β∨γ)≿(α∨δ) ↔β≿(ε∨δ)).] 
of 𝖲𝗎𝖻1 and 𝖲𝗎𝖻2 allow eliminating x from every other conjunct of type <ref>, as well as all conjuncts of types <ref> and <ref>. As in the previous case, we may need to use duplicates of state descriptions, but this can be done in the very same manner. It remains to show that x can be eliminated from the last equality kx ∨α≈β. Since x appears nowhere else in the conjunct, the whole formula will be equisatisfiable with the result of replacing kx ∨α≈β with β≿α. The latter is derivable from the former by 𝖣𝗂𝗌𝗍 and transitivity of ≿. Finally, after eliminating all variables, we will end up with a conjunction of statements each provably equivalent (by 𝖣𝗂𝗌𝗍) to either ≿ or ≿. Unsatisfiability of φ^* means the latter must be a conjunct, but this formula is refutable in 𝖠𝖷_comp. Consequently φ^* is derivable, and by (<ref>), φ is itself derivable, assuming π^*. That is, we have shown that ⊢_𝖠𝖷_compπ^* →φ. Because φ does not involve any of the new atoms in 𝖯𝗋𝗈𝗉^+, we can iteratively discharge the assumption of π^* by 𝖯𝗈𝗅𝖺𝗋𝗂𝗓𝖾, conjunct by conjunct from (<ref>). §.§ Finite Axiomatizability We saw that the canonical axiomatization of the logic of comparative probability ℒ_comp features infinitely many axiom schemes. The finite cancellation axioms feature a separate axiom scheme φ_n (α_1,...α_k_n) for each n∈ℕ, where the α's range over the Boolean formulas. By contrast, observe that the system 𝖠𝖷_add for the logic of (explicitly arithmetical) additive comparisons is finitely (scheme-)axiomatizable, in the sense that it is given by a finite set of axiom schemes over 𝖠𝖷_base: that is, the axiomatization features only finitely many axiom schemes of the form φ_n (α_1,...α_k_n), where the α's range over the Boolean formulas, and finitely many axiom schemes ψ_n (𝗍_1,...𝗍_k_n), where the 𝗍_i's range over the terms in the language. This notion of finite axiomatizability—axiomatizabilty by finitely many schemes—is the natural one to consider in our propositional setting. The standard axiomatizations of finite comparative probability structures in the literature are given by schemes of this form, with an implicit universal quantification over events. For ℒ_comp, finite (scheme-)axiomatizability in our sense corresponds to the finite axiomatizability of comparative probability structures, over finite structures,[Recall that a class of structures 𝕂 is axiomatized by Γ over finite structures if 𝕂=Mod(Γ) ∩𝖥𝗂𝗇: that is, 𝕂 is exactly the class of finite models of Γ.] by a universal sentence in a first-order language with quantification over events (or finite axiomatizability tout court, if we include a uniform substitution rule in our system). Here we show that, for the less expressive language ℒ_comp, the presence of infinitely many schemes is inevitable. As opposed to explicitly `arithmetical' system 𝖠𝖷_add, the logic of ℒ_comp is not finitely axiomatizable in that sense. Unless we enrich the system with powerful additional inference rules like we did in Section <ref>, no finite set of axiom schemes can capture the ℒ_comp-validities. Our proof proceeds in two steps. We first note that a finite axiomatization of ℒ_comp would result in a finite universal axiomatization, over finite structures, of the class of comparative probability orders in the first-order language of ordered Boolean algebras. We then appeal to a variant of a theorem of <cit.> to show that comparative probability orders are not finitely axiomatizable by a universal sentence over finite structures. 
The construction appeals to combinatorial results by <cit.> on finite cancellation axioms. §.§.§ Vaught's theorem and finite axiomatizability We work with the language ℒ_𝖡𝖠∪{≿} of Boolean algebras with an additional binary relation. The signature ℒ_𝖡𝖠 is given by (0, 1, ⊗, ⊕, ·^) where the constant symbols 0 and 1 stand for the bottom and top element, the binary function symbols ⊗ and ⊕ stand for the Boolean meet and join operations, respectively, and the unary function symbol ·^ stands for Boolean complementation. Consider the following class of ℒ_𝖡𝖠∪{≿}-structures. The class 𝖥𝖢𝖯 of finite comparative probability structures consists of all finite Boolean algebras with a representable comparative probability order: i.e., all structures of the form 𝒜=(A, 0_𝒜,1_𝒜, ⊗, ⊕, ·^, ≿) where (A, 0_𝒜,1_𝒜, ⊗, ⊕, ·^) is a Boolean algebra, and the order ≿ on A is representable by a probability measure, in that there exists some probability measure on 𝒜 such that, for all a,b∈ A, we have a≿ b if and only if (a)≥(b). We can naturally translate each φ in ℒ_comp into a universally quantified φ in ℒ_𝖡𝖠∪{≿}: we assign a variable p_i^∗:= x_i to each atomic p_i, and extend it in the obvious way so that each Boolean expression β gets assigned a corresponding term β^∗∈ℒ_𝖡𝖠.[We let (β)^∗ := (β^∗)^, (α∧β)^∗ := α^∗⊗β^∗ and (α∨β)^∗ := α^∗⊕β^∗.] For each term 𝐏(β), we have 𝐏(β)^∗ := β^∗, and each formula φ∈ℒ_comp is translated into a quantifier free φ^∗∈ℒ_𝖡𝖠∪{≿}.[Let (t_1≿ t_2)^∗ :=t_1^∗≿ t_2^∗, (φ∧ψ)^∗ := φ^∗∧ψ^∗, (φ∨ψ)^∗ := φ^∗∨ψ^∗, and (φ)^∗:=φ^∗.] Now take the translation that assigns, to each φ∈ℒ_comp, the formula φ := ∀ x_1...∀ x_nφ^∗ where p_1,...,p_n are all the atomic propositions occurring in φ. Then φ is valid on all probability models if and only if 𝖥𝖢𝖯φ. In particular, a scheme φ(α_1,…,α_n) is valid on all probability models if and only if 𝖥𝖢𝖯φ. Observe also that, given Theorem <ref>, the class 𝖥𝖢𝖯 is axiomatized over finite structures by universal sentences, by taking standard universal axioms for Boolean algebras and adding the translations of the infinitely many 𝖥𝗂𝗇𝖢𝖺𝗇_n axiom schemes (as well as the 𝖠𝖷_base axioms). To show that the logic of comparative probability ℒ_comp is not finitely axiomatizable over 𝖠𝖷_base, it suffices to show that 𝖥𝖢𝖯 is not axiomatizable over finite structures by a single universal sentence in ℒ_𝖡𝖠∪{≿}. For suppose there was a finite collection of schemes Δ={ψ_1,...,ψ_k} such that every ℒ_comp-validity over probability models followed from Δ. Then Δ:={ψ_1,…, ψ_k}⊂ℒ_𝖡𝖠∪{≿} would finitely axiomatize the universal theory of 𝖥𝖢𝖯 over finite structures. But 𝖥𝖢𝖯 being universally axiomatizable, this would amount to an axiomatization of 𝖥𝖢𝖯 by a single universal sentence (over finite structures). We will now show that 𝖥𝖢𝖯 is not axiomatizable by a universal sentence over finite structures. We recall one useful model-theoretic definition that will be needed. A class 𝕂 of first-order structures is uniformly locally finite if there exists a function f:ℕ→ℕ such that for any 𝔐∈𝕂 and any subset {a_1,…,a_n}⊆dom(𝔐), we have |𝔐⟨a⃗⟩| ≤ f(n), where 𝔐⟨a⃗⟩ is the substructure of 𝔐 generated by {a_1,…,a_n}. We will make use of the following (minor variant of) Vaught's characterisation of structures axiomatizable by a universal sentence <cit.>. Let 𝕂 a uniformly locally finite class of structures in a finite first-order signature ℒ. 
If 𝕂 is axiomatizable by a universal sentence over finite structures, then (i) 𝕂 is closed under substructures; (ii) there is some n∈ℕ such that, for any finite structure 𝔄, if every substructure 𝔅⊆𝔄 of size at most n belongs to 𝕂, then 𝔄∈𝕂. (i) is immediate. Let 𝖥𝗂𝗇 be the class of finite structures. For (ii), suppose 𝕂=Mod(φ)∩𝖥𝗂𝗇 for some φ∈ℒ of the form φ = ∀ x_1…∀ x_kψ(x_1,...,x_k), with ψ quantifier-free. Since 𝕂 is uniformly locally finite, there is some f:ℕ→ℕ such that, for any 𝔄∈𝕂, any substructure of 𝔄 generated by k elements has size at most f(k). Take n=f(k), and we show that this n satisfies (ii). Take a finite structure 𝔄∉𝕂. We show 𝔄 contains a substructure of size at most f(k) which is not in 𝕂. We have 𝔄⊭φ, so ∃a̅=(a_1,...,a_k)∈𝔄 with 𝔄⊭ψ[a̅]. Consider 𝔄⟨a̅⟩, the substructure of 𝔄 generated by {a_1,...,a_k}. By definition of f, we have |𝔄⟨a̅⟩|≤ f(k). Since ψ is quantifier-free, ¬ψ is preserved under substructures, so 𝔄⟨a̅⟩⊭ψ[a̅], hence 𝔄⟨a̅⟩⊭φ. So 𝔄⟨a̅⟩∉𝕂, which establishes (ii). Note that the class 𝖥𝖢𝖯 of finite Boolean algebras with a representable comparative probability order is uniformly locally finite, with the bound on the size of generated substructures given by f(n)=2^2^n. We will use Proposition <ref> to show that 𝖥𝖢𝖯 is not axiomatizable over finite structures by a universal formula. §.§.§ The logic of comparative probability is not finitely axiomatizable In order to show that 𝖥𝖢𝖯 is not axiomatizable by a universal formula, we appeal to a combinatorial analysis of cancellation axioms in the setting of a restricted class of ordered Boolean algebras, which corresponds to one of the earliest comparative probability structures introduced by <cit.>. A de Finetti order is a structure (ℬ,≿), where ℬ is a Boolean algebra and ≿ a binary relation on ℬ satisfying the following: 𝖳𝗈𝗍. ≽ is a reflexive total order; 𝖭𝗈𝗇𝖣𝖾𝗀. ¬(0≽1); 𝖭𝗈𝗇𝖳𝗋𝗂𝗏. ∀ x (x≿0); 𝖰𝗎𝖺𝗌𝗂. ∀ x_1 x_2( x_1≿ x_2↔( x_1 ⊗ x_2^c≿ x_2⊗ x_1^c )). A linear de Finetti order is one where the strict binary relation ≻, defined by a≻ b iff ¬(b ≿ a), is a linear order. A de Finetti order on n atoms is one where the underlying Boolean algebra ℬ is a finite algebra generated by n atoms. Note that the definition of de Finetti orders simply characterizes, in the language of ordered Boolean algebras, comparative orders satisfying the quasi-additivity axiom discussed in Section <ref>.[Recall that, in the plain set-theoretic language of comparative probability orders, quasi-additivity is equivalent to the statement that for any A, B and C such that (A∪ B)∩ C=∅, we have A≿ B if and only if A∪ C≿ B∪ C.] The notion of two sequences (a_1,…, a_n) and (b_1,…, b_n) of elements from ℬ being balanced transfers naturally to the setting of Boolean algebras. Given b∈ℬ and a ℬ-atom c, say that b lies above c if c⊗ b = c. Now define (a_1,…, a_n) and (b_1,…, b_n) to be balanced if, for every atom c in the algebra, the number of a_i's above c equals the number of b_i's above c. We write (a_1,…, a_n)≡_0 (b_1,…, b_n) to express that the two sequences are balanced.[It follows from the formulation of balancedness in (<ref>) in Section <ref> that the statement (a_1,…, a_n)≡_0 (b_1,…, b_n) can in fact be expressed in ℒ_𝖡𝖠∪{≿}.] Consider the following property 𝖲_k: (𝖲_k) If (a_i)_i≤ N≡_0 (b_i)_i≤ N are balanced sequences with at most k distinct pairs (a_i, b_i), then it is not the case that a_i≻ b_i for all i≤ N. (𝖲_k) is a variant of the finite cancellation axiom 𝖥𝗂𝗇𝖢𝖺𝗇_k. 
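To see what an instance of (𝖲_k) rules out, consider a small example of our own. In the algebra generated by three atoms c_1, c_2, c_3, the sequences (a_1, a_2) = (c_1 ⊕ c_2, c_3) and (b_1, b_2) = (c_1, c_2 ⊕ c_3) are balanced: every atom has exactly one a_i and exactly one b_i above it, so (a_1, a_2) ≡_0 (b_1, b_2). Property (𝖲_2) therefore forbids having both c_1 ⊕ c_2 ≻ c_1 and c_3 ≻ c_2 ⊕ c_3 at once, and indeed no representing measure ℙ could accommodate both, since they would force ℙ(c_2) > 0 and ℙ(c_2) < 0 simultaneously. 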
Note that k here counts the number of distinct premises a_i≻ b_i in the antecedent, and not the number of premises. Observe also that a linear de Finetti order satisfying S_k for all k∈ℕ also satisfies all instances of finite cancellation 𝖥𝗂𝗇𝖢𝖺𝗇_k. Thus, by Theorem <ref>, linear de Finetti orders satisfying all axioms (𝖲_k) belong to 𝖥𝖢𝖯: they are probabilistically representable. One can use the (𝖲_k) axioms to measure `how much' finite cancellation an order needs to satisfy in order to be probabilistically representable. Given a fixed bound on the size of a finite algebra (say, at most n atoms), we can ask: is there some k such that every de Finetti order of this size satisfying S_k is representable? <cit.> investigates such bounds. He defines: f(n):= min{k∈ℕ | every linear de Finetti order on n atoms that satisfies 𝖲_k is representable} Known bounds on f(n) are given by the following: For all n, f(n)≤ n+1. For any m≥ 6, there exists a linear de Finetti order on a Boolean algebra with m atoms that fails 𝖲_m-1, but satisfies 𝖲_m-2. We now use these bounds to prove our desired result. The class 𝖥𝖢𝖯 is not axiomatizable by a universal sentence over finite structures. We show that condition (ii) of Proposition <ref> fails for 𝖥𝖢𝖯. That is, we show that for any n∈ℕ, there exists some finite structure 𝔄 = (𝒜,≿)∉𝖥𝖢𝖯 such that every one of its substructures 𝔅=(ℬ,≿ℬ) of size at most n is in 𝖥𝖢𝖯. Given sufficiently large n, take a linear de Finetti order 𝔄 as given by Proposition <ref> with m≥log_2n + 3. Then 𝔄 is not representable, as it fails 𝖲_m-1. Now take any of its substructures (ℬ,≿ℬ) of size at most n ≤ 2^m-3. Such an algebra has at most m-3 atoms, and it is evidently a linear de Finetti order (note that linear de Finetti orders are axiomatizable by a universal sentence, hence preserved under substructures). Since (𝒜,≿) satisfies 𝖲_i for all i≤ (m-2), so does the substructure (ℬ,≿ℬ): for any violation of 𝖲_i in (ℬ,≿ℬ) would also hold in (𝒜,≿).[Each inequality a_i≻ b_i is obviously preserved under substructures; so whether any instance of 𝖲_i holds only depends on whether all the elements a_i, b_i involved are indeed in the subalgebra.] But now ℬ is an algebra on at most m-3 atoms, and by Proposition <ref> f(m-3)≤ m-2. This means that any linear De Finetti order on m-3 atoms that satisfies 𝖲_m-2 is representable, since it then also satisfies 𝖲_f(m-3). So any 𝔅⊂𝔄 of size at most n is representable. From this we can conclude: ℒ_comp is not finitely axiomatizable over 𝖠𝖷_base. §.§ Complexity of additive systems In this section, we recall well-known results that characterize the complexity of reasoning in the additive systems ℒ_comp and ℒ_add. 𝖲𝖠𝖳_comp and 𝖲𝖠𝖳_add are 𝖭𝖯-complete. We rehearse a proof by Fagin, Halpern, and Megiddo to show that 𝖲𝖠𝖳_add is 𝖭𝖯-complete. This requires two lemmas: If there exists a non-negative solution to a system of m linear inequalities with integer coefficients each of length at most ℓ, then the system has a non-negative solution with at most m nonzero entries, and where the size of each entry is O(m ℓ + m log (m)). We begin by transforming the system of linear inequalities into a linear program. Let x_1,...,x_n denote the variables appearing in the system. For each non-strict inequality constraint ∑_j a_i,j x_j ≤ b_j, one can introduce a slack variable x_n+j and define the equality constraint ∑_j a_i,j x_j + x_n+j = b_j. 
Similarly, for a strict inequality constraint ∑_j a_i,j x_j < b_j, one can introduce x_n+j and write ∑_j a_i,j x_j + x_n+j = b_j, adding to this the constraint that x_n+j≥ x_0. This gives rise to system of linear constraints Ax = b in the variables x_0,....,x_n+m, which can be placed in the following linear program: maximize x_0 subject to Ax = b, x≥ 0 Because the original system has a non-negative solution, the above linear program has a solution for which the objective function x_0 is positive. It is well-known (see, for example, Ch. 8 of ) that the simplex algorithm, which traverses the vertices of a convex polytope associated with the system of inequalities Ax≤b, will discover an optimal, non-negative solution x^* to the system Ax = b, and in this case it follows from optimality that x_0^* >0. Thus x_1^*,...,x_n^* provide a non-negative solution to the original system of inequalities. The simplex algorithm explores solutions to the linear program which lie at vertices by successively setting n - m variables to 0 and setting the remaining m variables so as to satisfy the linear constraints, so the solution x^* has at most m variables positive. Following the presentation in <cit.>, we now bound the size of the non-zero entries of x^*. We observe that deleting the zero entries of x^* and the corresponding rows of b and columns of A gives vectors x̅ and b̅ and a matrix A̅ such that A̅x̅ = b̅. By Cramer's rule, x̅_j = det(A̅_j)/det(A̅), where A̅_j is the result of replacing the j^th column of A̅ with b̅. It suffices to show that one can express det(A̅) using at most O(m ℓ + m log (m)) bits. Recall that det(A̅) = ∑_σ∈ S_m (-1)^N(σ) a_1 σ(1)··· a_m σ(m), where S_m is the set of permutations of [m] = {1,...,m} and N(σ) = {i, j ∈ [m] : i < j and σ(i) > σ(j)} is the set of indices inverted by σ. Each entry a_i σ(i) of A̅ has size at most ℓ. Thus each term in the above sum has size at most m ·ℓ. Noting that |S_m| = m!, relabel the terms in the above sum and define y_i such that det(A̅) = ∑_ i ∈ [m!] y_i. Group the y_i in pairs arbitrarily and sum them to produce a sequence y_i^' which is half as long: det(A̅) = ∑_ i ∈ [m! /2] y_i^'. The size of each sum y_i^' is at most one greater than that of its summands. Repeating this process log (m!) ≤log (m^m) = m ·log (m) times produces a single number of size at most m ·ℓ + m ·log (m). Let |φ| denote the number of symbols required to write φ, and let ||φ|| denote the length of the longest rational coefficient of φ, written in binary. Suppose φ∈ℒ_add is satisfiable. Then φ has a model which assigns positive probability to at most |φ| events δ∈Δ_φ, where the probability assigned to each such δ is a rational number with size O(|φ| · ||φ|| + |φ|·log(|φ|) ). Since φ is satisfiable, it has a model 𝔐 which is also a model of some disjunct ψ appearing in the disjunctive normal form of φ. Pushing all sums in ψ to one side, we observe that ψ is a conjunction of formulas of the form ∑_i𝐏(ϵ_i) ≿ b or ∑_i𝐏(ϵ_i) ≻ b. Let ψ(Δ_φ) be the result of replacing each instance of 𝐏(ϵ_i) in ψ with the sum ∑_δ∈Δ_φ δ→ϵ_i𝐏(δ) and adding the constraint ∑_δ∈Δ_φ𝐏(δ) = 1. Then a model of ψ(Δ_φ) is a model of ψ and so a model of φ, and the formula ψ(Δ_φ) simply describes a system of at most |φ| linear inequalities, so the result follows immediately from Lemma <ref>. Since ℒ_comp⊆ℒ_add, it follows that 𝖲𝖠𝖳_comp≤𝖲𝖠𝖳_add. It thus suffices to show that 𝖲𝖠𝖳_comp is 𝖭𝖯-hard and that 𝖲𝖠𝖳_add is in 𝖭𝖯. 
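To make the slack-variable construction and the size bound concrete, here is a minimal sketch (ours, not from the paper) using SciPy's linprog with the HiGHS solver. It encodes a toy system with one strict inequality in the standard form described above, maximizes the strictness witness x_0, and reads off an optimal vertex solution in which only a few of the original variables are nonzero, in line with the lemma above.

```python
# A toy instance of the LP transformation used in the lemma above (our example):
#   x1 + x2 + x3 + x4 <= 1      (non-strict)
#   x3 - x1           <  0      (strict)
#   -x2               <= -1/4   (non-strict)
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, 1.0, 1.0],
              [-1.0, 0.0, 1.0, 0.0],
              [0.0, -1.0, 0.0, 0.0]])
b = np.array([1.0, 0.0, -0.25])
n, m = 4, 3

# Standard form with variables (x_0, x_1..x_n, s_1..s_m): A x + s = b, all >= 0,
# plus s_2 >= x_0 so that a positive x_0 witnesses the strict inequality.
A_eq = np.hstack([np.zeros((m, 1)), A, np.eye(m)])
A_ub = np.zeros((1, 1 + n + m))
A_ub[0, 0] = 1.0           # x_0 ...
A_ub[0, 1 + n + 1] = -1.0  # ... minus the slack of the strict row: x_0 - s_2 <= 0
c = np.zeros(1 + n + m)
c[0] = -1.0                # linprog minimizes, so minimize -x_0 to maximize x_0

res = linprog(c, A_ub=A_ub, b_ub=[0.0], A_eq=A_eq, b_eq=b,
              bounds=(0, None), method="highs")
x = res.x[1:1 + n]                                           # the original variables
print("strictness witness x_0:", res.x[0])                   # 0.75 > 0
print("solution:", x, "| nonzeros:", int((x > 1e-9).sum()))  # only 2 of the x_i are nonzero
```

The unique optimum here is the vertex x = (3/4, 1/4, 0, 0), illustrating how the simplex-style argument yields a solution supported on few coordinates.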
The Cook-Levin Theorem <cit.> states that satisfiability for Boolean formulas α is 𝖭𝖯-complete. This is equivalent to the problem of deciding whether 𝐏(α) ≻0 is satisfiable, which is an instance of the problem 𝖲𝖠𝖳_comp, showing that the latter is 𝖭𝖯-hard. Consider now the task of determining whether φ∈ℒ_add is satisfiable. Using Lemma <ref>, we request a small model as a certificate and confirm that it satisfies φ. § MULTIPLICATIVE SYSTEMS §.§ Polynomial probability calculus The paradigmatic multiplicative language is ℒ_poly (Definition <ref>), which adds a binary multiplication operator (denoted ·). Encompassing all comparisons between polynomial functions of probabilities, this language is sufficiently rich to express, e.g., conditional probability and (in)dependence. The system 𝖠𝖷_poly (Figure <ref>) annexes axioms capturing multiplication and its interaction with addition to 𝖠𝖷_add, and our principal result here is its completeness. 𝖠𝖷_poly is sound and complete. Soundness is again straightforward. As for completeness, we first obtain a normal form. We assume, without loss (𝖡𝗈𝗈𝗅), that φ is a conjunction of literals. Let 𝖯𝗋𝗈𝗉^φ⊂𝖯𝗋𝗈𝗉 be the finite set of letters appearing in φ, with Δ_φ the set of complete state descriptions of 𝖯𝗋𝗈𝗉^φ (see <ref>). Then: Replacement of equivalents 𝖱𝖾𝗉𝗅 (see Lemma <ref>) for ℒ_poly is derivable in 𝖠𝖷_poly. It suffices (by induction, and both versions of 𝖢𝗈𝗆𝗆) to derive 𝖺≈𝖻→𝖺 +𝖼≈𝖻 + 𝖼 and 𝖺≈𝖻→𝖺·𝖼≈𝖻·𝖼. The first follows by completeness of 𝖠𝖷_add. As for the second: by 𝖭𝗈𝗇𝖭𝗎𝗅𝗅 case on 𝖼≻0 and 𝖼≈0; use 𝖢𝖺𝗇𝖼 in the former case and 𝖹𝖾𝗋𝗈 in the latter case. For any ϵ∈σ(𝖯𝗋𝗈𝗉^φ), we have 𝖠𝖷_poly⊢𝐏(ϵ) ≈0 + ∑_δ∈Δ_φ δ→ϵ𝐏(δ). Using 𝖱𝖾𝗉𝗅 recursively and instances of 𝖠𝖽𝖽 (where ϵ stands for α and a single letter from 𝖯𝗋𝗈𝗉^φ stands for β) we can show that 𝐏(ϵ) ≈∑_δ∈Δ_φ𝐏(ϵδ). Since formulas in Δ_φ are complete state descriptions for the letters appearing in ϵ, we have propositionally (ϵδ) ↔δ if δ→ϵ, and (ϵδ) ↔ otherwise. Applying 𝖣𝗂𝗌𝗍, 𝖱𝖾𝗉𝗅, and 𝖹𝖾𝗋𝗈 (from 𝖠𝖷_add) we obtain the final result. Note that the order and associativity of the sum above generally does not matter, as can be easily seen using (additive) 𝖢𝗈𝗆𝗆 and 𝖠𝗌𝗌𝗈𝖼. The same holds for products, so we will simply omit this information where convenient. Finally, we have a normal form: Let φ be a conjunction of literals. Then there is a collection of polynomial terms {𝖺_i, 𝖻_i, 𝖺'_i', 𝖻'_i'}_1 ≤ i ≤ m 1 ≤ i' ≤ m' using only the basic probability terms {𝐏(δ)}_δ∈Δ_φ∪{0} such that 𝖠𝖷_poly⊢φ↔⋀_δ∈Δ_φ𝐏(δ) ≿0∑_δ∈Δ_φ𝐏(δ) ≈1⋀_i=1^m 𝖺_i ≿𝖻_i ⋀_i'=1^m'𝖺'_i'≻𝖻'_i'. The first two conjuncts on the right in (<ref>) are clearly derivable (with or without φ). Literals in φ are of the form 𝖺≿𝖻 or ¬ (𝖺≿𝖻), the latter being 𝖻≻𝖺; thus the literals in the formula give the terms {𝖺_i, 𝖻_i, 𝖺'_i', 𝖻'_i'}_1 ≤ i ≤ m 1 ≤ i' ≤ m' in (<ref>). By 𝖱𝖾𝗉𝗅 (Lemma <ref>) and Lemma <ref>, we may assume without loss that any of these terms uses only basic terms {𝐏(δ)}_δ∈Δ_φ∪{0}. The conjuncts in (<ref>) translate into a simultaneous system of polynomial inequalities in the indeterminates {𝐏(δ)}_δ∈Δ_φ. It is clear that φ is satisfiable iff this system has a solution, so let us apply a well-known characterization <cit.>[Note that the variant here (see ibid., Theorem 4) assumes a polynomial ring over a subfield of the real closed field.] of the (un)feasible sets of related systems: Let R = ℚ[x_1, …, x_n] and F, G, H ⊂ R be finite. Let cone(G) be the closure of G ∪{s^2 : s ∈ R} under + and × and let ideal(H) = {∑_h ∈ H a_h h : a_h ∈ R for each h }. 
Then either the system {f ≠ 0, g ≥ 0, h = 0 : (f, g, h) ∈ (F, G, H) } has a solution over ℝ^n, or there is a polynomial certificate of infeasibility: there are g ∈cone(G), h ∈ideal(H), n ∈ℕ such that g+h+f^2n = 0 where f = ∏_f' ∈ F f'. Note that the well-known Farkas' lemma of linear programming (obtained via Fourier-Motzkin elimination) is a special case of this more general theorem of the alternative, in which the certificate can always be taken to have a certain restricted form. The following variant is more directly applicable for our purpose: Suppose above that R = ℤ[x_1, …, x_n]. Then either the given system has a solution over ℝ^n, or there are g ∈cone(G), h ∈ideal(H), n ∈ℕ, d ∈ℤ^+ such that g + h + d f^2n = 0. In the latter alternative of Theorem <ref>, take the certificate and multiply by d, setting it to the least common denominator of coefficients in g, h. The normal form (<ref>) yields F, G, H ⊂ℤ^+[{𝐏(δ)}_δ] for Corollary <ref>. Note that by translating the conjunct ∑_δ𝐏(δ) ≈ 1 as two inequalities we may take H = ∅; the strict inequality 𝖺'_i'≻𝖻'_i' gives two inequalities a'_i' - b'_i'≥ 0 ∈ G and a'_i' - b'_i'≠ 0 ∈ F. Here a'_i', b'_i' denote the (informal) polynomials translated in the obvious fashion from the respective formal terms 𝖺'_i', 𝖻'_i'; this convention will be used hereafter. When translating a polynomial a backward to its formal equivalent denoted 𝖺, we will assume 𝖺 is a sum of so-called normal monomials. Fixing some total order ≺_Δ_φ on K = {𝐏(δ) }_δ∈Δ_φ∪{0, 1}, call a term 𝗍 over K a normal monomial if: (1) only multiplication · appears within it; (2) all multiplication is left-associative; (3) for any base terms 𝐏(δ), 𝐏(δ') in 𝗍, if δ≺_Δ_φδ', then 𝐏(δ) appears to the left of 𝐏(δ'); (4) 1, that is, 𝐏(⊤), appears exactly once as a factor in 𝗍, and is leftmost; (5) 0, that is, 𝐏(), appears once if ever, and if it appears then no other term from K appears. It is easy to see that any term has an equivalent that is a sum of normal monomials by applying 𝖣𝗂𝗌𝗍 toward fulfilling (1), 𝖠𝗌𝗌𝗈𝖼 for multiplication toward (2), 𝖢𝗈𝗆𝗆 toward (3), 𝖮𝗇𝖾 toward (4), and 𝖹𝖾𝗋𝗈 toward (5). Fixing some order on normal monomials, and applying the completeness of 𝖠𝖷_add and again 𝖹𝖾𝗋𝗈 (for 𝖠𝖷_add), we see that we can assume without loss that the sum is in said order and is left-associative, and further that the normal monomial containing 0 appears exactly once in it and appears leftmost. We call such a sum the normal monomial form of a given term. Now, suppose we have a certificate c = g + d f^2n = 0 as in (<ref>) for this system. Given a nonzero polynomial a let a^+, a^- be the unique polynomials with strictly positive coefficients such that a = a^+ - a^-; let a^+ = a^- = 0 if a = 0. Let j = d f^2n, c_+ = g^+ + j^+, and c_- = g^- + j^-. We claim that ⊢(<ref>)→𝖼_+ ≻𝖼_- 𝖼_+ ≈𝖼_-. One can in fact show ⊢𝖼_+ ≈𝖼_-: assuming normal monomial form without loss, the same normal monomial terms, with the same multiplicities, must appear as summands in 𝖼_+ and 𝖼_-, because otherwise c_+ ≠ c_- (since 0 appears exactly once in a monomial term for both sums, it cannot make the difference). We claim that ⊢(<ref>)→𝗀^+ ≽𝗀^- while ⊢(<ref>)→𝗃^+ ≻𝗃^-; again by completeness of 𝖠𝖷_add, this gives ⊢(<ref>)→𝖼_+ ≻𝖼_-. We make use of the following result. Let T = {𝗍_1, …, 𝗍_n} and T' ={𝗍'_1, …, 𝗍'_n} be sets of terms and let S = {∏_i=1^n 𝗌_i}_𝗌_1 ∈{𝗍_1, 𝗍'_1} … 𝗌_n ∈{𝗍_n, 𝗍'_n}. Let S^+ ⊂ S and S^- ⊂ S be the sets of monomial products that have an even and odd number respectively of factors from T'. 
Then 𝖠𝖷_poly⊢𝗍_1 ≿𝗍'_1 ∧⋯∧𝗍_n ≿𝗍'_n →∑_𝗌∈ S^+𝗌≿∑_𝗌∈ S^-𝗌, as is the analogous rule when every ≿ is replaced by a strict ≻. Straightforward by inducting on n: apply 𝖲𝗎𝖻 and 𝖣𝗂𝗌𝗍, e.g. 𝖺·𝖼 + 𝖻·𝖽≿𝖺·𝖽 + 𝖻·𝖼 ∧ 𝖾≿𝖿→𝖺·𝖼·𝖾 + 𝖻·𝖽·𝖾 + 𝖺·𝖽·𝖿 + 𝖻·𝖼·𝖿≿𝖺·𝖼·𝖿 + 𝖻·𝖽·𝖿 + 𝖺·𝖽·𝖾 + 𝖻·𝖼·𝖾. To see that ⊢(<ref>)→𝗀^+ ≽𝗀^-, note that since g ∈cone(G), it is a sum of terms of the form g' = k^2 g_1 … g_l, where g_1, …, g_l ∈ G and k, g_1, …, g_l ≠ 0. It thus suffices to show ⊢(<ref>)→ (𝗀')^+ ≽ (𝗀')^- for any such term g'. Given our construction of G from (<ref>), note that the inequality 𝗀^+_i ≽𝗀^-_i appears in (<ref>) for each g_i. Referring to Lemma <ref>, note that if we define p = (t_1 - t'_1) … (t_n - t'_n) ≠ 0, then p^+ = ∑_s ∈ S^+ s and p^- = ∑_s ∈ S^- s; by casing on whether 𝗄^+ ≽𝗄^- or 𝗄^- ≽𝗄^+, we can apply the Lemma, finding in either case that ⊢(<ref>)→ (𝗀')^+ ≽ (𝗀')^-. Now, to see that ⊢(<ref>)→𝗃^+ ≻𝗃^- it suffices to show that ⊢(<ref>)→ (𝖿^2n)^+ ≻ (𝖿^2n)^-. If F = ∅, then f = ∏_f' ∈ F f' = 1 and this is simple; otherwise, for each strict inequality f ∈ F we have f = a'_i - b'_i for some constraint 𝖺'_i ≻𝖻'_i in (<ref>). If any such f = 0 then by employing normal monomial form, we find ⊢(<ref>)→⊥; otherwise, apply the strict variant of Lemma <ref>. §.§ Comparative conditional probability Recall the language of comparative conditional probability ℒ_cond (Definition <ref>), given by the grammar φ ::= 𝐏(α|β) ≿𝐏(δ|γ) | ¬φ | φ∧ψ, for any α,β,δ, γ∈σ(𝖯𝗋𝗈𝗉). This language allows us to reason about comparisons of conditional probabilities. Reasoning about such comparisons plays an important role in a variety of settings. A salient example is probabilistic confirmation theory, where conditional probabilities are interpreted as a measure of the comparative support that some evidence confers to a hypothesis: a comparison of the form P(H_1|E_1) ≿ P(H_2|E_2) is interpreted as the statement that evidence E_1 confirms hypothesis H_1 at least as much as evidence E_2 confirms hypothesis H_2. More generally, the focus on conditional probabilities is often motivated by the view—found, for instance, in <cit.> and <cit.>—that (numerical) conditional probabilities are more fundamental than unconditional probability judgments: similarly, one may want to treat comparisons of conditional probability as more fundamental than comparisons of unconditional probabilities.[This possibility is suggested, for example, in <cit.>. See also <cit.>.] Although one may be tempted, at first sight, to view the logic of comparative conditional probability as only a minor extension of the logic of (unconditional) comparative probability axiomatized above, it is important to note that moving to ℒ_cond involves a substantial jump in expressivity. The language ℒ_cond allows one to express non-trivial quadratic inequality constraints[Consider again the example mentioned in the introduction, which consists of the formula 𝐏(A ∧ B) ≈𝐏(¬(A ∧ B)) ∧𝐏(A|B) ≈𝐏(B). This holds in a model (Ω,ℱ, ℙ, ·) only if ℙ( A ∧ B ) = 1- ℙ( A ∧ B ) and ℙ( A ∧ B ) = ℙ( B )^2, which forces the irrational solution ℙ( B )= 1/√(2).] and, as such, it belongs more naturally to the family of multiplicative systems. As we will now discuss, this comes with a rather significant shift in the complexity of its satisfiability problem as well as its axiomatization. In Section <ref> we show that ℒ_cond and all multiplicative systems we study in this paper have an ∃ℝ-complete satisfiability problem. Before this, we discuss some challenges involved in proving a completeness theorem for ℒ_cond. 
These difficulties can be traced back to certain delicate questions concerning the canonical representation theorem for conditional comparative probability orders, due to <cit.>. To the best of our knowledge, Domotor's proposed axiomatization is the only known such representation result for finite probability spaces that does not depend on imposing additional richness constraints on the underlying space. (See for an overview of other approaches.) However, as we explain, an important step in Domotor's proof appears to require further justification. Moreover, from the perspective of the additive-multiplicative distinction explored in this paper, Domotor's proof strategy is not fully satisfactory, as it does little to clarify the exact algebraic content of the axioms involved. In light of these observations, we may not be able to rely on Domotor's proposed axiomatization of conditional probability orders to obtain a completeness result for ℒ_cond. The question of axiomatizing ℒ_cond is thus left open. §.§.§ Conditional probability and quadratic probability structures In order to a give a complete axiomatization for the logic of comparative conditional probability, a natural first step is to identify necessary and sufficient conditions for the representation of a comparative order between pairs of events. Suppose we represent comparative conditional judgments with a quaternary relation ≽ on a finite Boolean algebra of events ℱ. We write A|B ≽ C|D when the relation ≽ obtains for the quadruple (A,B,C,D). Such a relation is probabilistically representable if there exists a probability measure ℙ on ℱ such that for any A,B,C,D ∈ℱ, we have A|B ≽ C|D if and only if ℙ(A|B) ≥ℙ(C|D), where ℙ(X|Y) := ℙ(X∩ Y)/ℙ(Y). What properties must a quaternary order ≽ satisfy in order to be representable in this way? Throughout the literature, the answer to this question is credited to <cit.>, who proposes necessary and sufficient conditions for such a quaternary order to be representable by a probability measure. A finite qualitative conditional probability structure (FQCP), is a triple (Ω, ℱ,≽) where Ω is a finite set, ℱ a field of sets over Ω and ≽ a quaternary relation on ℱ, which additionally satisfies the following properties for all A, B, C, D∈ℱ: 𝖭𝗈𝗇𝖹𝖾𝗋𝗈. A | B≽ C | D holds only if B | Ω≻∅ | Ω and D | Ω≻∅ | Ω Accordingly, in the following, in any expression A | B≽ C | D it is assumed that we are quantifying over B, D∈ℱ_0, where ℱ_0:={X∈ℱ : X | Ω≻∅ | Ω}: 𝖳𝗈𝗍. either A | B ≽ C | D or C | D ≽ A | B for all B, D∈ℱ_0; 𝖭𝗈𝗇𝖣𝖾𝗀. Ω | Ω≻∅ | Ω ; 𝖭𝗈𝗇𝖳𝗋𝗂𝗏. B | C≽∅ | A ; 𝖨𝗇𝗍𝖾𝗋. A∩ B | B ≽ A | B; 𝖥𝗂𝗇𝖢𝖺𝗇𝖢𝗈𝗇𝖽_n. if (A_i,B_i)_i≤ n and (C_i,D_i)_i≤ n are balanced and ∀ i< n, A_i | B_i≽ C_i | D_i, then C_n | D_n ≽ A_n | B_n; 𝖬𝗎𝗅𝗍𝖢𝖺𝗇_n. for any permutation π on {1… n}: if (A_k | ⋂_0≤ i <k A_i) ≽ (B_π(k) | ⋂_0≤ i < π(k) B_i) for all 0<k≤ n, then (⋂_0< i ≤ n A_i | A_0) ≽(⋂_0 <i ≤ n B_π(i) | B_0); Moreover, in the last two schemes above, if ≻ holds for any comparison in the antecedent, then ≻ also holds in the conclusion. In the setting of conditional comparative probability, we say that the two sequences (of pairs of events) (A_i,B_i)_i≤ n and (C_i,D_i)_i≤ n are balanced if and only if ∑_i⩽ nA_i|B_i = ∑_i⩽ nC_i|D_i, where A|B:Ω→{0,1} is a partial characteristic function, given by A|B(ω) = 1 if ω∈ A∩ B; 0, if ω∈ (Ω∖ A)∩ B; undefined if ω∉B. The sum ∑_i⩽ nA_i|B_i is undefined whenever one of the terms is undefined. 
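The partial characteristic functions and the associated notion of balancedness are easy to render executable. The following sketch is ours (events are Python frozensets over a finite Ω, and, as a simplifying convention, two sums that are undefined at exactly the same points are counted as agreeing):

```python
# A small executable rendering (ours) of balancedness for sequences of pairs of events.
def chi(A, B, w):
    """Partial characteristic function A|B: 1 on A∩B, 0 on B∖A, None (undefined) outside B."""
    if w not in B:
        return None
    return 1 if w in A else 0

def partial_sum(pairs, w):
    vals = [chi(A, B, w) for A, B in pairs]
    return None if any(v is None for v in vals) else sum(vals)

def balanced(omega, lhs, rhs):
    """(A_i, B_i)_i and (C_i, D_i)_i are balanced iff their partial sums agree at every point."""
    return all(partial_sum(lhs, w) == partial_sum(rhs, w) for w in omega)

omega = {1, 2, 3}
A, B, O = frozenset({1}), frozenset({2}), frozenset(omega)
# ((A, Ω), (B, Ω)) vs ((A ∪ B, Ω), (∅, Ω)): both sums equal 1 on A ∪ B and 0 elsewhere.
print(balanced(omega, [(A, O), (B, O)], [(A | B, O), (frozenset(), O)]))  # True
```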
Domotor's proposed axiomatization of conditional comparative probability relies on an axiomatization of finite quadratic probability structures. The strategy consists in proving a representation theorem for quadratic probability structures, giving necessary and sufficient conditions for a binary relation ≽ on ℱ^2 to be representable as a product of probabilities, in the sense that there exists a probability measure such that A× B ≽ C× D if and only if (A)·(B) ≥(C)·(D). The representation theorem for conditional comparative probability is based on the representation result for quadratic probability structures: the essential idea is that one can express each comparison A | B≽ C | D as a comparison of products of the form (A∩ B) × D ≽^' (C∩ D) × B. The method for constructing of a probability measure that represents these product inequalities in the sense of (<ref>) yields a probability measure that represents the conditional probability comparisons given by ≽. It is informative to sketch the reasoning in a little more detail, in order to highlight both the differences with the representation argument for the unconditional case (ℒ_comp), as well as several points in Domotor's argument that require clarification. Moreover, the axioms that Domotor proposes for quadratic probability structures give a clearer motivation for Definition <ref>. They are given below. A finite quadratic probability structure (FQPS), is a triple (Ω, ℱ,≽) where Ω is a finite set, ℱ a field of sets over Ω and ≽ a binary relation on ℱ^2, which satisfies the following properties for all A, B, C, D∈ℱ: 𝖰1. Ω×Ω≻∅×Ω; 𝖰2. B× C≽∅× A; 𝖰3. A× B ≽ B× A; 𝖰4. A× B ≽ C× D or C× D≽ A× B; 𝖰5_n. for any permutations π and τ on {1… n}, if A_i× B_i≻∅×Ω and A_π(i)× B_τ(i)≽ A_i × B_i for all i<n, then A_n × B_n≽ A_π(n)× B_τ(n) ; 𝖰6_n. If (A_i× B_i)_i≤ n and (C_i× D_i)_i≤ n are balanced, and ∀ i< n, A_i× B_i≽ C_i× D_i, then C_n× D_n ≽ A_n× B_n; Moreover, in 𝖰5_n and 𝖰6_n, if ≻ holds for any comparison in the antecedent, then ≻ also holds in the conclusion. (A_i× B_i)_i≤ n and (C_i× D_i)_i≤ n are balanced whenever ∑_i≤ nA_i× B_i = ∑_i≤ nC_i× D_i. 𝖰1. Ω×Ω≻∅×Ω; 𝖰2. B× C≽∅× A ; 𝖰3. A× B ≽ B× A; 𝖰4. A× B ≽ C× D or C× D≽ A× B; 𝖰5_n. for any permutations π and τ on {1… n}: if A_i × B_i ≽ A_π(i)× B_τ(i) for all i<n, then A_π(n)× B_τ(n)≽ A_n × B_n ; 𝖰6_n. If (A_i× B_i)_i≤ n and (C_i× D_i)_i≤ n are balanced, and ∀ i< n, A_i× B_i≽ C_i× D_i, then C_n× D_n ≽ A_n× B_n; Moreover, in 𝖰5_n and 𝖰6_n, if ≻ holds for any comparison in the antecedent, then ≻ also holds in the conclusion. Note that the axiom scheme 𝖰6_n is a version of the finite cancellation axiom for ℒ_comp, applied to product sets of the form A× B. Moreover, under the translation (<ref>) above, the axiom scheme 𝖰6_n corresponds to the scheme 𝖥𝗂𝗇𝖢𝖺𝗇𝖢𝗈𝗇𝖽_n for conditional comparative probability. By contrast, the axiom scheme 𝖰5_n captures a version of multiplicative cancellation. Given a set of n inequalities A_i× B_i≽ C_i × D_i, say that an event is cancelled if it has the same number of occurrences on the left hand side of these inequalities (as A_i or B_i) as on the right hand side (among C_i, D_i). In the presence of the symmetry axiom 𝖰3, the axiom 𝖰5_n asserts that whenever we have sequence of n inequalities where all events are cancelled except for A and B on the left-hand side, and C and D on the right-hand side, then we can conclude A× B ≽ C× D. 
This is exactly what we would obtain by multiplying the probabilities of all left-hand side terms on the left, and the probabilities of all right-hand side terms on the right, and then cancelling any terms occurring on each side. Under the translation (<ref>), 𝖰5_n corresponds to the axiom 𝖬𝗎𝗅𝗍𝖢𝖺𝗇_n from Definition <ref>. §.§.§ Domotor's argument The necessity of axioms (schemes) 𝖰1-𝖰6_n is easily verified. Domotor provides an argument to the effect that these axioms are also sufficient for the order ≽ to be product-representable by a probability measure, in the sense of condition (<ref>). We will now give a brief description of the proof strategy. This will serve, first, to emphasize the algebraic nature of the problem, which involves proving the consistency of certain polynomial constraints and thus calls for techniques beyond the standard linear algebra involved in the purely additive setting of Theorem <ref>. Secondly, as we will see, there is a step in the argument which suggests that the proof of sufficiency is incomplete as it stands. Suppose we are given a finite quadratic probability structure (Ω, ℱ, ≽) as in Definition <ref>: without loss of generality, we will assume we are working with the full powerset algebra ℱ=𝒫(Ω). We want to show that the relation ≽ is product-representable in the sense of (<ref>). Consider the space ℝ^n× n containing the indicator functions 𝟙_A× B of all products A× B. Take these indicator functions as vectors in ℝ^n× n, where each 𝟙_A× B is the vector x with x_ij=1 exactly if (ω_i, ω_j)∈ A× B (that is, we list all elements of Ω×Ω in lexicographic order). Lift the order ≽ from ℱ to M={𝟙_A× B | A, B∈ℱ}. Now we apply Lemma <ref>: note that axioms 𝖰4 and 𝖰6_n give us precisely conditions (a) and (b) in the statement of the Lemma. This gives us the existence of a linear functional Φ̂: ℝ^n× n→ℝ with Φ̂(𝟙_A× B) ≥Φ̂(𝟙_C× D) whenever A× B≽ C× D. This functional is of the form Φ̂(x)= 𝐚^Tx= (a_11, a_12,…, a_1n, a_21,…, a_nn) (x_11, x_12,…, x_1n, x_21,…, x_nn)^T. The existence of such a linear functional already entails that there is a measure μ on 𝒫(Ω×Ω) that respects the ordering on Cartesian products ≽, by defining μ(A× B):= Φ̂(𝟙_A× B)/Φ̂(𝟙_Ω×Ω). But we need to ensure that there is a measure μ representing ≽ which corresponds to the product of a measure ℙ on 𝒫(Ω): i.e. such that μ(A× B)= ℙ(A)·ℙ(B). We represent the linear functional Φ̂ as a bilinear functional Φ:ℝ^n×ℝ^n→ℝ. It can be represented by a matrix: Φ(𝐱,𝐲)= 𝐱^T𝐌_Φ𝐲, where 𝐌_Φ is an n× n matrix: intuitively, we want the (i,j)-th entry to represent the probability of {ω_i}×{ω_j}. We write it as Φ(𝐱,𝐲)= 𝐱^T𝐌_Φ𝐲 = (x_1,…,x_n) (a_ij)_1≤ i,j≤ n (y_1,…,y_n)^T. We then have Φ(𝟙_A, 𝟙_B)= Φ̂(𝟙_A× B); below we simply write Φ(A,B) for Φ(𝟙_A,𝟙_B), and similarly f(A) for f(𝟙_A). Now, in order for Φ to give rise to a product measure as required, we want to know whether it can be decomposed as the product of linear functionals f_1 and f_2, so that we can write Φ(A,B)=f_1(A)f_2(B). This would suffice, as the symmetry axiom 𝖰3 ensures that Φ(A, B)=Φ(B, A) (𝐌_Φ is symmetric), which would ensure that f_1=f_2. Then setting ℙ(A) := f_1(A) / f_1(Ω) would give the desired measure (here 𝖰1 and 𝖰2 ensure it is a non-degenerate measure). A general condition for a bilinear functional to be decomposable in this way is given by the following standard characterization:[An n× n matrix 𝐌 has rank 1 if and only if it can be written in the form 𝐌= 𝐮𝐯^T for two vectors 𝐮, 𝐯. In one direction, suppose 𝐌_Ψ has rank one, and so is of the form 𝐮𝐯^T. 
Then Ψ( x,y) = x^T(uv^T)y = (x^Tu)(v^Ty) = (u^Tx)(v^Ty), so that we define f_1(x):=u^Tx and f_2(x):=v^Tx for the decomposition into linear functionals. Conversely, given two linear functionals generated by respective vectors u and v in the same fashion, the matrix uv^T generates a decomposable bilinear functional.] A bilinear functional Ψ: ℝ^n×ℝ^n→ℝ can be written as Ψ(𝐱,𝐲)= f_1(𝐱)f_2(𝐲) for two linear functionals f_1, f_2:ℝ^n→ℝ if and only if rank(𝐌_Ψ)=1. We then say Ψ is a rank-1 functional. This is where Domotor's argument seems to face a difficulty (or perhaps an omission in the presentation). The proof began by establishing the existence of the bilinear functional Φ which represents the order ≽. At this point the proof proceeds to argue that Φ—a generic order-preserving linear functional obtained from Lemma <ref>—has rank 1.[See <cit.>.] Thus the strategy seems to be to argue that any such functional representing ≽ has rank 1. This, however, is not the case, as can be seen by a simple example. Say that two functionals Φ and Ψ are order-equivalent on 𝒫(Ω)×𝒫(Ω) if we have ≽_Φ=≽_Ψ, where we define A× B≽_Φ C× D if and only if Φ(A,B)≥Φ(C, D). We can have two such functionals that are order-equivalent with only one of them having rank 1. For instance, in the case Ω={ω_1, ω_2}, we can take functionals given by matrices 𝐌_Φ = [ 1/16 3/16; 3/16 9/16 ] and 𝐌_Ψ = [ 1/12 3/12; 3/12 5/12 ]. Here Φ is a rank-1 functional, whose order ≽_Φ is representable by the probability measure ℙ with ℙ({ω_1})=1/4 and ℙ({ω_2})=3/4. However, Ψ is evidently not rank-1. Yet the order ≽_Ψ agrees with ≽_Φ (a numerical check is given in the sketch below), and thus satisfies axioms 𝖰1-𝖰6_n. The axioms 𝖰1-𝖰6_n thus cannot guarantee in general that any linear functional that represents the order has rank 1. Clarifying this step of the argument (and indeed, determining if this difficulty may be due to a presentational ambiguity, rather than a logical gap) is complicated by the fact that the author does not spell out the decomposability argument in detail. Instead, the argument very briefly appeals to an unstated result in the geometry of webs <cit.>, from which, given the multiplicative cancellation axiom 𝖰5_n, decomposability is inferred.[Unfortunately, we were unable to trace the original article by Aczél, Pickert and Radó.] Returning to the task at hand: we know that there exists a bilinear functional Φ which represents the order ≽. In order to ensure the order is representable via products of probabilities, we want to show that the axioms guarantee the existence of a bilinear functional of rank 1 that is order-equivalent to Φ. §.§.§ The (semi-)algebraic perspective on the representation problem Whether or not the approach via web geometry can be made to succeed in showing the sufficiency of Domotor's axioms, there is a sense in which it is not the most natural from the perspective of investigating the additive-multiplicative divide in probabilistic logics. Recall that Scott's representation theorem (Theorem <ref>), and thus the standard completeness proof for ℒ_comp, directly relates the finite cancellation axioms to certificates of inconsistency for linear inequality systems. In the same way, it is of intrinsic interest to pursue a representation theorem for conditional comparative probability that would directly reveal the algebraic content of Domotor's proposed axioms (and axiom 𝖰5_n in particular, which captures the multiplicative behaviour of conditional probability orders). It is thus worth considering a direct algebraic formulation of the problem. 
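Before turning to that algebraic formulation, here is the numerical check of the rank-1 counterexample promised above (our own sketch; events over Ω = {ω_1, ω_2} are encoded as 0/1 indicator vectors):

```python
# Verify numerically that M_phi and M_psi are order-equivalent but only M_phi has rank 1.
import itertools
import numpy as np

M_phi = np.array([[1, 3], [3, 9]]) / 16   # rank 1: induced by P(w1) = 1/4, P(w2) = 3/4
M_psi = np.array([[1, 3], [3, 5]]) / 12   # rank 2
print(np.linalg.matrix_rank(M_phi), np.linalg.matrix_rank(M_psi))  # 1 2

# All four events over Omega = {w1, w2}, as 0/1 indicator vectors, and all pairs A x B.
events = [np.array(v, dtype=float) for v in itertools.product([0, 1], repeat=2)]
pairs = list(itertools.product(events, repeat=2))

def order_equivalent(M, N):
    """Do x^T M y and x^T N y induce the same weak order on all products A x B?"""
    return all((a @ M @ b >= c @ M @ d) == (a @ N @ b >= c @ N @ d)
               for (a, b), (c, d) in itertools.product(pairs, repeat=2))

print(order_equivalent(M_phi, M_psi))  # True: same order on products, different ranks
```

The check confirms the point made above: since the axioms constrain only the induced order, they cannot by themselves force an order-preserving bilinear functional to have rank 1.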
Each order ≽ satisfying the axioms 𝖰1-𝖰6_n generates a system of polynomial inequalities. For every set A⊂Ω, the polynomial corresponding to A is given by p_A(x_1,...,x_n) = ∑^n_i=1A(ω_i) x_i. Now for each inequality A× B ≽ C× D in the order, we add a constraint p_A(x)p_B(x)- p_C(x)p_D(x)≥ 0, and similarly for strict inequalities. We need to show the resulting system is consistent whenever ≽ satisfies the axioms 𝖰1-𝖰6_n. Note that this system of inequalities is given by quadratic forms: each polynomial p_A(x)p_B(x)- p_C(x)p_D(x) is homogeneous of degree 2. Formulated in this way, it is clear that proving a representation result would amount to showing that the axioms ensure that the semi-algebraic sets defined by quadratic forms of this type are indeed nonempty. By analogy to the case of ℒ_comp, here it is natural to approach this problem via the Positivstellsatz (Theorem <ref>). As we mentioned above, the Positivstellensatz is a semi-algebraic analogue of hyperplane separation theorems, like those used in proving completeness for ℒ_comp. It establishes that there exist certificates of infeasibility of a given form for any infeasible system of polynomials. Just as, in the additive case, any certificate of infeasibility was shown to correspond to a failure of a finite cancellation axiom, so too, one may hope, a failure of the Domotor axioms (and the 𝖰5_n axiom in particular) can always be extracted from a Positivstellensatz certificate. We leave a definite solution to the representation problem for conditional probability—and completeness for ℒ_cond—for further work: as a first step, we show in the Appendix that the axioms for quadratic probability structures are indeed sufficient for representation in the special case of |Ω|=2. The observations in this section motivate further work on determining the adequacy of Domotor's axioms: this is not only to ensure that the canonical axiomatization for conditional probability orders on finite spaces is correct as it stands, but also because, as we suggested above, it is of intrinsic interest to pursue an alternative, algebraically transparent proof of the representation result. §.§.§ An alternative axiomatization Consider the system 𝖠𝖷_cond consisting of 𝖠𝖷_comp for each condition, plus the following axioms. §.§.§ Comparative binary products This should be axiomatized by 𝖠𝖷_quad below. §.§.§ Additive binary products Consider the language ℒ^2_add comparing sums of binary products, i.e., consisting of all formulas of the form ∑_i 𝐏(α_i)·𝐏(α'_i) ≿∑_j 𝐏(β_j)·𝐏(β'_j) | φ | φψ for any α_i, α'_i, β_j, β'_j ∈σ(𝖯𝗋𝗈𝗉). We write 𝐏(α) to abbreviate the term 𝐏(α)·1, and likewise for 0, 1. Consider the system 𝖠𝖷_add^2 consisting of 𝖠𝖷_add (schematic variables ranging over binary products) plus the following axioms 𝖠𝖷_add^2 is sound and complete. Soundness is straightforward. As for completeness, suppose φ is valid, or ¬φ is unsatisfiable. By completeness above in the polynomial system we have some p-satz certificate. We need to show that we can introduce assumptions for 𝖯𝗈𝗅𝖺𝗋𝗂𝗓𝖾 to reduce higher powers to quadratics so that the contradiction can be shown purely in 𝖠𝖷^2_add. Should we not be able to eliminate addition as well, again by the original polarization rule? §.§ Complexity of multiplicative systems The main result of this section finds a uniform complexity for all of our multiplicative systems: 𝖲𝖠𝖳_ind, 𝖲𝖠𝖳_confirm, 𝖲𝖠𝖳_cond, and 𝖲𝖠𝖳_poly are all ∃ℝ-complete. 
To show the above theorem, we borrow the following lemma from <cit.>: Fix variables x_1,...,x_n, and set of equations of the form x_i + x_j = x_k or x_i x_j = 1, for i,j,k ∈ [n]. Let ∃ℝ-inverse be the problem of deciding whether there exist reals x_1,...,x_n satisfying the equations, subject to the restrictions x_i ∈ [1/2,2]. This problem is ∃ℝ-complete. We omit the proof of Lemma <ref>, but at a high level, it proceeds in two steps. First, one shows that finding a real root of a degree-4 polynomial with rational coefficients is ∃ℝ-complete, and then repeatedly performs variable substitutions to get the constraints x_i + x_j = x_k and x_i x_j = 1. Second, one shows that any such polynomial has a root within a closed ball about the origin, and then shifts and scales this ball to contain exactly the range [1/2,2]. Since 𝖲𝖠𝖳_ind≤𝖲𝖠𝖳_confirm≤𝖲𝖠𝖳_cond≤𝖲𝖠𝖳_poly, it suffices to show that 𝖲𝖠𝖳_ind is ∃ℝ-hard and that 𝖲𝖠𝖳_poly is in ∃ℝ. To show the former, we extend an argument given by <cit.>, and to show the latter, we repeat the proof given by <cit.> (see also <cit.>). Let us first show that 𝖲𝖠𝖳_poly is in ∃ℝ. Suppose that φ∈ℒ_poly is satisfied by some model ℙ. Using the fact that ∃ℝ is closed under 𝖭𝖯-reductions (<cit.>; cf. Definition <ref>) it suffices to provide an -reduction of φ to a formula ψ∈𝖤𝖳𝖱. Let E contain all ϵ such that 𝐏(ϵ) appears in φ. Then consider the system of equations ∑_δ∈Δ_φ𝐏(δ) = 1 ∑_δ∈Δ_φ δϵ𝐏(δ) = ℙ(ϵ) for ϵ∈ E. When plugged in for ℙ, the measure ℙ satisfies the above system, so by Lemma <ref>, this system is satisfied by some model ℙ_small assigning positive (small-sized) probability to a subset Δ_small⊆Δ_φ of size at most |E| ≤ |φ|. Let the set Δ_small and the model ℙ_small be the certificate of the -reduction.[Equivalently, imagine the reduction proceeding will all possible choices of Δ_small and ℙ_small. We will show that some such choice produces a satisfiable formula ψ if and only if φ is satisfiable.] The reduction proceeds by replacing each ϵ∈ E in φ with the δ∈Δ_small which imply it, and then checking whether ℙ_small is a model of the resulting formula ψ. (The size constraint on E ensures that ψ can be formed in polynomial time.) If φ is satisfiable, Δ_small and ℙ_small exist, so the reduction of φ successfully produces a satisfiable formula ψ. Conversely, the success of the reduction with the witnesses Δ_small and ℙ_small ensures the satisfiability of φ, since a model of ψ is a model of φ. Let us now show that 𝖲𝖠𝖳_ind is -hard. To do this, consider an ∃ℝ-inverse problem instance φ with variables x_1,...,x_n. It suffices to find a polynomial-time deterministic reduction to a _ind instance ψ. We first describe the reduction and then show that it preserves and reflects satisfiability. Corresponding to the variables x_1,...,x_n, define fresh events δ_1,...,δ_n∈σ(Prop). Define fresh, disjoint events δ_1^',...,δ_n^'. Let ψ be the conjunction of the constraints 1/n≿𝐏(δ_i) ≿1/4n for i=1,...,n 𝐏(δ_i) P(δ_j) 𝐏(δ_i δ_j) = 1/4n^2 for x_i · x_j = 1 in φ P(δ_i^') = P(δ_i)P(δ_j^') = P(δ_j)P(δ_i^'δ_j^') = 𝐏(δ_k) for x_i + x_j = x_k in φ. The formula ψ is not yet a formula in ℒ_ind, since it features constraints of the form 𝐏(α) ≿ 1/N and 1/N ≿ 𝐏(α). For any constraint of the form 𝐏(α) ≿ 1/N, replace 1/N with 𝐏(ϵ_N), replace α with αϵ_N, and require that the fresh events ϵ_1,...,ϵ_N are disjoint with P(_i ϵ_i)=1 and P(ϵ_i) = P(ϵ_j) for i =1,...,N. 
Similarly, for any constraint of the form 1/N ≿ 𝐏(α), replace 1/N with 𝐏(ϵ_N^'), replace α with αϵ_N^', and require that the fresh events ϵ_1^',...,ϵ_N^' are disjoint with P(_i ϵ_i^')=1 and P(ϵ_i^') = P(ϵ_j^') for i =1,...,N. This completes our description of the reduction. The map x_i ↦ x_i/2n sends satisfying solutions of φ to those of ψ, and the inverse map ℙ(δ_i) ↦ℙ(δ_i) · 2n sends satisfying solutions of ψ to those of φ. Further, the operations performed are simple, and the introduced events δ_i, δ_i^', ϵ_i and the constraints containing them are short, so the reduction is polynomial-time. Some authors have suggested that probability logic with conditional independence terms may be a useful compromise between additive languages built over linear inequalities and the evidently more complex polynomial languages (see, e.g., ). However, the above result shows that even allowing simple independence statements among events (not to mention conditional independence statements among sets of variables) results in ∃ℝ-hardness. Thus while probability logic with conditional independence seems on its face to offer a compromise, it in fact introduces (at least) the complexity of the maximally algebraically expressive languages considered in this paper. The above result shows that reasoning in the seemingly simpler systems ℒ_ind and ℒ_confirm are just as complex as ℒ_poly, because the former systems allow for the expression of independence statements. We conclude with two observations relating to the above result. First, a minimal extension of ℒ_comp, discussed by <cit.>, remains 𝖭𝖯-complete, even though it includes some mention of conditional probability: Fix a nonempty set of proposition letters 𝖯𝗋𝗈𝗉. The language ℒ_same cond is defined: φ∈ℒ_same cond φ = 𝐏(α|β) ≿𝐏(α^'|β) | φ | φψ for any α, α^', β∈σ(𝖯𝗋𝗈𝗉) 𝖲𝖠𝖳_same cond is 𝖭𝖯-complete. Hardness follows immediately from Theorem <ref> and the observation that ℒ_same cond is at least as expressive as ℒ_comp. To show completeness, take any φ∈ℒ_same cond, and let ψ be the result of replacing each term 𝐏(α | γ) in φ with 𝐏(αγ). We claim that φ and ψ are equisatisfiable. Indeed, for any measure ℙ, we have: ℙ(αγ) ≥ℙ(βγ) ℙ(α | γ) ≥ℙ(β| γ). Thus the inequalities mentioned in φ hold precisely when those mentioned in ψ hold. Second, whereas the above theorem characterizes the complexity of reasoning about independence among events, the following result due to <cit.> characterizes the complexity of reasoning about independence among two random variables: Determining independence of two random variables is complete for the complexity class 𝖼𝗈𝖣𝖯, the complement of 𝖣𝖯, the class of all languages ℒ such that ℒ = ℒ_1 ∩ℒ_2, where ℒ_1 is in 𝖭𝖯 and ℒ_2 is in 𝖼𝗈𝖭𝖯 (the complement of 𝖭𝖯). <cit.> also characterize the complexity of several other tasks concerning the (conditional) independence of sets of random variables. § SUMMARY: THE ADDITIVE-MULTIPLICATIVE DIVIDE We identified an important dividing line in the space of probability logics, based on the distinction between purely additive and multiplicative systems. Additive systems can encode reasoning about systems of linear inequalities; multiplicative systems can encode reasoning about polynomial inequality systems that include at least quadratic constraints. The distinction between additive and multiplicative systems robustly tracks a difference in computational complexity: while the former are 𝖭𝖯-complete, the latter are ∃ℝ-complete. 
As a consequence, inference involving (implicitly) multiplicative notions is inherently more complex (assuming 𝖭𝖯≠∃ℝ): this applies to various intuitively `qualitative' systems for reasoning about independence (ℒ_ind), confirmation (ℒ_confirm), or comparisons of conditional probability (ℒ_cond). As we saw, proving completeness for the additive and multiplicative systems involves different methods. While completeness for additive systems relies on linear algebra (and hyperplane separation theorems or variable elimination methods), the natural mathematical setting for multiplicative systems is semialgebraic geometry (and completeness relies on versions of the real Positivstellensatz). Importantly, both in the additive and multiplicative settings, systems with explicitly `numerical' operations are more expressive and admit finite axiomatizations, while incurring no cost in complexity. By contrast, even the most paradigmatically `qualitative' logic of unconditional comparative probability (ℒ_comp) is not finitely axiomatizable.[A similar phenomenon occurs in other non-numerical systems for probabilistic reasoning. See, for example, results on the non-finite axiomatizability of conditional independence for discrete random variables <cit.> and for Gaussian random variables <cit.>.] Thus, from a logical perspective, there is little to be gained from restricting attention, in applications of probability logics, to syntactically `qualitative' systems without arithmetical operations. These results also illustrate how ease of elicitation and complexity of inference might come apart. As we noted, the use of comparative probability is sometimes motivated by the view that `qualitative' judgments are more intuitive, or easier to elicit, than explicitly `quantitative' ones. While these claims are somewhat difficult to substantiate (see the discussion below in Section <ref>), our results give a concrete sense in which intuitions about the ease of elicitation of certain comparative judgments do not reflect the complexity of inference involving these judgments. Consider, for example, the case of ℒ_cond and ℒ_add. While at first blush there may be something more immediate about comparisons of conditional probabilities that are not explicitly numerical, as opposed to comparisons expressible in ℒ_add (`is A twice as likely as B?'), our results suggest that reasoning with the former is more complex than reasoning with the latter. While the distinction between additive and multiplicative systems is an informative dividing line that is useful in classifying the landscape of probability logics, investigating this divide also illustrates that the very distinction between qualitative and quantitative reasoning remains somewhat elusive. Certainly, prima facie natural ways to formulate the distinction, based on the simplicity and intuitiveness of comparative judgments, or on the explicit presence of arithmetical operators, do not seem to capture any clear or robust distinction that tracks properties of logical interest such as complexity, expressivity, and axiomatizability. We are thus left with the question of how to give concrete substance to the often invoked, but never delineated, qualitative/quantitative distinction: are there any structural properties of inference that are characteristic of qualitative reasoning in probabilistic contexts? We turn to this question next. § WHAT IS THE QUANTITATIVE-QUALITATIVE DISTINCTION? Before concluding we briefly consider the larger conceptual questions with which we began. 
How might we understand the prevalent distinction between quantitative and qualitative formulations of probabilistic principles and inference patterns, particularly in light of the landscape of systems we have explored in the present work? We begin by entertaining several suggestions from the literature. §.§ Previous Suggestions As briefly discussed in the introduction, gestures toward a distinction between qualitative and quantitative formulations of probability can be found throughout the literature, going back at least as far as <cit.>. In the seminal work by de Finetti on comparative probability, the express goal was to `start out with only qualitative notions' before `one arrives at a quantitative measure of probability' <cit.>. However, while the distinction often arises as informal motivation, there has been less explicit discussion of what exactly the distinction might be. A survey of the literature reveals two families of proposals. Syntactic proposals locate the distinction in formal syntax, whereas semantic proposals focus instead on the variety of models a system admits. §.§.§ Syntactic Proposals The passage quoted above from <cit.> invites a view on which the distinction tracks whether numbers, or more generally arithmetical concepts, appear explicitly in our formal language. Comparative probability languages like ℒ_comp and perhaps also ℒ_cond, on this view, are typically qualitative, since there are no numerical terms or operations. In a recent paper, <cit.> state this view clearly: `What distinguishes qualitative from quantitative probability (truth valued) logics is that qualitative probability logics do not employ quantities or arithmetic operations in the syntax, and the informal reading of the qualitative probability formulas do not require a quantitative interpretation.' On this picture, comparative judgments do not involve any explicit reference to numbers, so such systems would count as qualitative. By contrast, a language like ℒ_add would presumably be considered quantitative, since it involves an explicit addition-like operator.[Notably, the system introduced by <cit.> also employs an `addition-like' operator, allowing for a simple finite axiomatization of what is essentially our language ℒ_add. The target in this paper rather appears to be the extension from <cit.> that includes explicit constant terms for integers.] While there may be extra-logical reasons to focus attention on qualitative systems in this sense, such restrictions come at a logical price. At no increase in reasoning complexity, the presence of arithmetical functions affords simple finite axiomatizations, as well as greater expressivity. We return to potential extra-logical, viz. empirical, motivations below in <ref>. §.§.§ Semantic Proposals An alternative way of drawing the distinction appeals not to the syntax, but to the semantics of the system. The quotation from <cit.> also gestures at such a view, according to which qualitative systems do not require a quantitative interpretation. The suggestion seems to be that qualitative systems are sufficiently general that they also admit interpretations that do not involve numbers in any explicit way. All of the systems we studied here are interpreted relative to a probability space (Ω,ℱ,ℙ). But some of them also admit alternative interpretations. Take, for instance, ℒ_add. 
Consider any totally ordered commutative monoid (M;⊕,⊒) that also satisfies the following two conditions: * Double Cancellation: if a ⊕ e ⊒ c⊕ f and b ⊕ f ⊒ d ⊕ e, then a⊕ b ⊒ c ⊕ d; * Contravenience: if a ⊕ b ⊒ c ⊕ d and d ⊒ b, then a ⊒ c. Then our earlier results imply: Consider any mapping · from σ(𝖯𝗋𝗈𝗉) to a totally ordered commutative monoid (M;⊕,⊒). If · also satisfies the following three conditions: * Monotonicity: α⊒β, whenever β→α, * Non-triviality: ⋣⊤, and * Additivity: α∨β = α⊕β, whenever β→α then we will obtain completeness for the system 𝖠𝖷_add (and hence also 𝖠𝖷_comp). The assumptions above are enough to guarantee soundness of 𝖠𝖷_base, as well as axioms 𝖠𝖽𝖽, 2𝖢𝖺𝗇𝖼 and 𝖢𝗈𝗇𝗍𝗋. The remaining three axioms of 𝖠𝖷_add follow from the fact that M is a totally ordered commutative monoid. Completeness will follow immediately, since we always have a countermodel in the non-negative rationals (ℚ^+;+,≥), by Theorem <ref>. The same obviously applies to the smaller system, 𝖠𝖷_comp.[This observation mirrors the classic result of <cit.>, showing that every two (totally) ordered Abelian groups have the same existential (or universal) first-order theory.] However, there are also other commutative monoids that do not explicitly involve numbers but still satisfy Double Cancellation and Contravenience. For example, we could take M = {a}^* to be the set of all finite strings over a unary alphabet (the `free monoid' over {a}), with ⊕ string concatenation and ⊒ the relation of string containment. Since x ↦ kx for any positive integer k is an embedding of (ℚ^+;+,≥) to itself, we can always find countermodels in (ℕ;+,≥), which is in turn isomorphic to ({a}^*;⊕,⊒), so we have: 𝖠𝖷_add (and 𝖠𝖷_comp) is complete with respect to interpretations in ({a}^*;⊕,⊒). On this alternative interpretation, the `probability' of an event is taken to be simply a string in unary. To the extent that the multiplicative systems do not enjoy such alternative interpretations, our division appears to harmonize with this distinction. We leave as an open question whether systems like 𝖠𝖷_poly can be interpreted in models that are not (isomorphic to some sub-semiring of) the real numbers or the unit interval. A more radical way of drawing the distinction is to insist not that a system possess non-numerical models in order to be qualitative, but that the system have no straightforward models that are numerical. For instance, in the literature on uncertain reasoning, systems that license inferences like A,B |∼ A ∧ B have been deemed qualitative, since they correspond to intuitive, non-numerical patterns, but are incompatible with straightforward probabilistic interpretations (e.g., according to which A is accepted just in case ℙ(A)>θ for some threshold θ). For instance, <cit.> give voice to this perspective when they write: `Broadly speaking, there are two ways of approaching the formal analysis of uncertain reasoning: quantitatively, using in particular probability relationships, or by means of qualitative criteria. As is widely recognized, the consequence relations that are generated in these two ways behave quite differently.' Systems of non-monotonic reasoning can often be given quantitative probabilistic interpretations (see, e.g., ), and there are various ways of ameliorating the inferential tensions in these contexts <cit.>. But such resolution usually comes at the expense of quantitative granularity typical of numerical probabilistic reasoning. 
In any case, on this way of drawing the distinction, none of the systems we have studied in the present work would count as qualitative, even the basic comparative system 𝖠𝖷_comp. §.§ Empirical Issues Some of the motivations for less quantitative formulations of probability come not from issues of logic and complexity, but rather from empirical concerns. A common thought is that eliciting comparative judgments may be in some way more tractable. Relatedly, numerous authors have suggested that quantitative judgments may not always be empirically meaningful in the same way that qualitative judgments may be. The intuition behind this motivation is clear enough. Comparative judgments introspectively appear easier to make than numerical comparisons, and echoing the earlier suggestions by Keynes, Koopman and others, some more recent authors have concluded that they are in some sense more psychologically `real' (e.g., ). Indeed, it has long been appreciated that binary comparative judgments in general can be more stable or reliable than absolute judgments, even to the extent that some recent researchers advocate replacing the latter with the former to mitigate noise in judgment <cit.>. Such judgments play a central role in algorithms for probabilistic inference as well, under the assumption that probabilistic comparisons—especially between hypothesis that are `nearby' in a larger state space—are relatively easy <cit.>. Similar arguments about the ease and naturalness of qualitative judgments have been marshalled for conditional independence relationships, purportedly arising from even more fundamental qualitative causal intuitions <cit.>. §.§.§ Direct Measurements of Probability For the specific problem of eliciting subjective probabilities, not only is stability across time and contexts important—it is also significant that the whole pattern of attitudes be consistent with at least one probabilistic representation in the first place. It was pointed out already by <cit.> that verifying this in the purely comparative setting, even for a small number of basic events, involves a combinatorial explosion of pairs to check. Worse yet, we now have overwhelming evidence (e.g., from the long line of work starting with ) that ordinary judgments about comparative probability routinely violate even the most basic axioms, such as 𝖣𝗂𝗌𝗍. Similar experimental patterns confront the logic of (conditional) independence (e.g., ). These considerations, at the very least, put pressure on any claim to the effect that qualitative judgments enjoy special empirical tractability. Once we abandon the ambition of eliciting coherent, fully specified patterns of judgments, the numerical/non-numerical distinction again begins to appear somewhat arbitrary. Contemporary methods for probability elicitation tend to be partial, and they handle numerical judgments in a similar way to their treatment of purely comparative judgments (e.g., ). In the behavioral sciences, probabilities are commonly measured on continuous sliding scales, or on 7-point Likert scales (e.g., `extremely unlikely' to `extremely likely'), often with a background assumption that such responses will be noisy reflections of an underlying psychological mechanism, estimable from many samples of the population (see, e.g., for discussion). From this perspective, comparative judgments may tend to be robust because they are often relatively insensitive to random perturbations. 
At the same time, we might expect a claim that `A and B are equally likely' to be more fickle than a claim like, `C is more than twice as likely as D'. So there again may be nothing distinctive about non-numerical comparisons in this regard. §.§.§ Indirect Measurement via Preference A prominent way of thinking about subjective probability takes it to be not a primitive notion, but rather derivative from an agent's preferences over `gambles' or `acts'. On this alternative view such preferences, as revealed in choice behavior, form the empirical basis of probability attributions. As long as the pattern of choices satisfies certain sets of axioms, a representation in terms of (fully quantitative, and typically unique) probabilities together with utilities is guaranteed (e.g., as in Savage's classic axiomatization). These axioms tend to be quite strong, and there have been countless criticisms of them, on both descriptive and normative grounds. Though interesting variants and weakenings have been proposed (e.g., ), it appears that the assumptions required will be at least as demanding as those made in the purely probabilistic case. Indeed, in the preferential setting there is a precise sense in which comparative probability judgments emerge as a special case of uncertain gambles. We would say that an agent considers α more likely than β if they prefer a gamble that returns a good outcome in case of α to one that returns the same good outcome in case of β. Axioms can be stated to guarantee that the resulting order will be probabilistically representable (see ), though, yet again, even the most basic of these have been questioned. For instance, in Ellsberg's celebrated counterexample to Savage's sure-thing principle, people tend to prefer a gamble on A to one on B, while simultaneously preferring a gamble on B ∨ C to one on A ∨ C, where C is incompatible with A and with B. This leads to a blatant violation of quasi-additivity (𝖰𝗎𝖺𝗌𝗂'). Nonetheless, if one is willing to weaken the axioms required to guarantee probabilistic representability (or simply disregard their systematic violation), these basic gambles can be elaborated to extract probability judgments with greater numerical content, including ratio comparisons (typical of ℒ_add and beyond). The basic idea, following <cit.> (see also , inter alia), is to begin with a way of eliciting meaningful utilities for outcomes, and then to use these utilities to measure probability judgments. For instance, if it can be determined that an agent judges outcome O_2 be to at least twice as desirable as O_1, then someone who prefers a gamble returning O_1 if α, to one returning O_2 if β, might be taken to judge α more than twice as likely as β. Could such methods be employed to determine not just additive constraints on probabilities, but also multiplicative constraints? Supposing we could establish that the utility of O_2 were greater than that of O_1 squared, for instance, we might conclude that α has probability greater than the square of β's probability. The problem with this suggestion is that most approaches to utility, including those that descend from <cit.>, assume that utilities are meaningful only up to linear transformation, so such conclusions would not be well defined. Thus, even in this context of empirical questions, we see that the natural dividing line may not be numerical or quantitative constraints per se, but rather additive versus multiplicative constraints. 
§ CONCLUSION AND OPEN QUESTIONS Through a broad distinction between additive and multiplicative formalisms for probabilistic reasoning, we have explored the landscape of probability logics with respect to fundamental questions about expressivity, computational complexity, and axiomatization. What emerges is a remarkably robust divide that cross-cuts some tempting ways of dividing the space based on intuitions about quantitative versus qualitative representation and reasoning. In addition to the technical contributions summarized above in <ref>, we also canvassed some of the empirical considerations that have motivated special attention to comparative and other `purely qualitative' judgments. At present it is not clear that the latter enjoy any distinctive empirical status, and if anything, the relevant empirical boundaries may better track the additive-multiplicative distinction that has been our focus here. It is certainly not an aim of this paper to discourage the use and exploration of qualitative probability. On the contrary, as we have discussed, such systems present rich opportunities for systematic logical investigation. Moreover, a number of recent authors have found such languages useful for stating and evaluating epistemological principles in a general and relatively neutral manner (e.g., , inter multa et alia). This is especially evident in settings where agreement with a probability measure is not assumed, or is even precluded (e.g., ). These considerations are rather different from many of the original motivations that prompted study of such systems, viz. simplicity and empirical tractability. From a logical perspective—concerning complexity, axiomatizability, and expressivity—and potentially also from an empirical perspective, prohibition on the use of simple numerical primitives appears Procrustean. A number of significant technical questions and directions on this subject merit further exploration. In addition to the outstanding issue of substantiating Domotor's argument for representation of comparative conditional probability (represented here as ℒ_cond), we also mention the following directions: * Many authors have argued that the comparability of ≿ should be rejected, on both normative and empirical grounds <cit.>. All of the systems presented in this paper could alternatively be interpreted relative to sets of probability measures. Logical systems for such settings have received considerable investigation (see, e.g., for an overview). It seems plausible that many of the results here would extend with little change; however, our proof methods often employed normal forms whose exact character depended on comparability. Working through this setting in detail would be worthwhile. * It is easy to imagine natural extensions of the languages considered here, including varieties of explicit quantification. For instance, <cit.> studies probability logics extending (what we here called) ℒ_add with quantification over events, while <cit.> explore a range of first order extensions of ℒ_add and (what we called) ℒ_poly with quantification over both field terms and (a second sort of) objects. Generally speaking, these extensions result in a significant complexity increases, leading to undecidability (in the best cases), Π^1_1-hardness, and even Π^1_∞-hardness. However, there are ways of curtailing this complexity (e.g., by restricting to bounded finite domains), and it could be enlightening to investigate our more refined space of languages in those settings. 
* A potentially more modest, but quite useful extension to any of these systems would be to add exponentiation, e.g., the function e^x. As such systems would also encode logarithms, this would allow reasoning about (conditional) entropy as well. It is unclear at present whether adding exponentiation to ℒ_poly would even result in a decidable system. By a theorem of <cit.>, a positive answer to this question would also settle Tarski's well known `exponential function problem' which has been open since the 1940s. Whether the problem may be easier for some weaker systems we have considered here remains to be seen. Moreover, it may be feasible to address questions of axiomatizability without solving that outstanding open problem. These are just some of the questions that will need to be answered before we have a fully comprehensive understanding of the space of natural probabilistic languages. apalike § APPENDIX: FINITE CANCELLATION AND 𝖠𝖷_ADD To help clarify the relationship between 𝖠𝖷_add and the standard axiomatization of comparative probability, one can verify the soundness of the finite cancellation scheme by deriving it in our additive system: Each instance of 𝖥𝗂𝗇𝖢𝖺𝗇_n is derivable in 𝖠𝖷_add. We use all of the axioms of 𝖠𝖷_add, and we appeal to 𝖠𝗌𝗌𝗈𝖼, 𝖢𝗈𝗆𝗆, and the rules and axioms of 𝖠𝖷_base (such as Boolean reasoning) without mention. At a high level, the idea of the proof is simply to combine all the inequalities together (using 2𝖢𝖺𝗇𝖼) and then cancel a number of terms from both side (using 𝖢𝗈𝗇𝗍𝗋). The first observation is that (α_1,…,α_n)≡_0(β_1,…,β_n) implies that 𝐏(δ)≈0 for every unbalanced state description δ over these 2n formulas. By 𝖣𝗂𝗌𝗍 and 𝖠𝖽𝖽, (α_1,…,α_n)≡_0(β_1,…,β_n) allows us to obtain ∑_δ∈ℬ𝐏(δ) ≈∑_δ∈Δ𝐏(δ). By multiple applications of 𝖢𝗈𝗇𝗍𝗋—and in particular its consequence, 1𝖢𝖺𝗇𝖼—we can cancel all of the summands from the left side, ending (by another application of 𝖢𝗈𝗇𝗍𝗋) with 0≈∑_δ∈Δ-ℬ𝐏(δ). By yet further applications of 𝖢𝗈𝗇𝗍𝗋 we conclude that 𝐏(δ)≈0 for each unbalanced δ∈Δ-ℬ. Let Δ_α_i be those state descriptions with α_i occurring positively (i.e., not negated), and likewise for Δ_β_i. By 𝖠𝖽𝖽 we know that 𝐏(α_i) ≈∑_δ∈Δ_α_i𝐏(δ), and likewise for each δ. By the previous observation that 𝐏(δ)≈0 for unbalanced sequences, we can conclude that 𝐏(α_i) ≈∑_δ∈ℬ_α_i𝐏(δ), with ℬ_α_i = Δ_α_i∩ℬ. So each remaining conjunct α_i ≿β_i in the antecedent can be rewritten as ∑_δ∈ℬ_α_i𝐏(δ) ≿∑_δ∈ℬ_β_i𝐏(δ). By axiom 2𝖢𝖺𝗇𝖼 we can combine all of these sums together, producing an inequality of the form: ∑_δ∈ℬ_α_1𝐏(δ) + … + ∑_δ∈ℬ_α_n-1𝐏(δ) ≿ ∑_δ∈ℬ_β_1𝐏(δ) + … + ∑_δ∈ℬ_β_n-1𝐏(δ). The crux of the proof is now to consider which terms will be cancelled from both sides (once again appealing to 𝖢𝗈𝗇𝗍𝗋). Any state description δ∈ℬ_β_n will appear once more on the left than on the right, so 𝐏(δ) will still remain as a summand on the left. All other terms on the left will be cancelled. Similarly for the right, we will end up with the sum of terms for state descriptions in ℬ_α_n, so that (<ref>) becomes simply ∑_δ∈ℬ_β_n𝐏(δ) ≿ ∑_δ∈ℬ_α_n𝐏(δ), or, in other words, by the observations above, our desired consequent β_n ≿α_n. The next observation shows that de Finetti's original proposal captures the first three levels of finite cancellation: 𝖰𝗎𝖺𝗌𝗂 and 𝖥𝗂𝗇𝖢𝖺𝗇_3 are equivalent over 𝖠𝖷_base. § APPENDIX: QUADRATIC QUALITATIVE PROBABILITY STRUCTURES FOR N=2 The Domotor axioms 𝖰1–𝖰4, the n=2 instance of 𝖰5, and 𝖰6 are complete for models with a binary event space Ω = {0, 1}. 
We aim to show there is a positive symmetric bilinear functional Φ: ℝ^2 ×ℝ^2 →ℝ representing the order that is additionally rank 1, and therefore factors as a product of measures. Q1–Q4, Q6 give everything except rank 1. The matrix of such a rank 1 is, up to normalization, either [ 1 x; x x^2 ] or [ 0 0; 0 1 ] where x ≥ 0. Let 0 = {0} and 1 = {1}. The latter matrix represents the total preorder (∅, ·) ∼ (0, 0) ∼ (0, Ω) ∼ (0, 1) ≺ (1, 1) ∼ (1, Ω) ∼ (Ω, Ω) where the · in (∅, ·) stands for any (or every) one of the four subsets of Ω. In the case of the former matrix, for A, B ⊂Ω we find Φ(1_A, 1_B) = (1+x^2) ∑_a ∈ A𝔵(a) ∑_b ∈ B𝔵(b) where the term 𝔵(ω) for ω∈Ω is defined to be 1 if ω = 0 and x if ω = 1. Considering the various (unordered) choices of A, B, we see that (<ref>) gives 10 polynomials in x, in fact 7 if we observe that all corresponding to (∅, ·) are identically 0. Below we plot these curves as a function of x. Without loss we have divided each by the prefactor 1+x^2 appearing in (<ref>). < g r a p h i c s > By inspecting[This can be made fully rigorous via e.g. Sturm sequences.] the intersections and their induced subdivisions in the first quadrant[The order (<ref>) arising from the degenerate matrix [ 0 0; 0 1 ] corresponds to x = +∞ on the plot.] we count 8 total preorders (∅, ·) ∼ (1, Ω) ∼ (0, 1) ∼ (1, 1) ≺ (0, 0) ∼ (0, Ω) ∼ (Ω, Ω) (∅, ·) ≺ (1, 1) ≺(0, 1) ≺ (1, Ω) ≺ (0, 0) ≺ (0, Ω) ≺ (Ω, Ω) (∅, ·) ≺ (1, 1) ≺ (0, 1) ≺ (0, 0) ∼ (1, Ω) ≺ (0, Ω) ≺ (Ω, Ω) (∅, ·) ≺ (1, 1) ≺ (0, 1) ≺ (0, 0) ≺ (1, Ω) ≺ (0, Ω) ≺ (Ω, Ω) (∅, ·) ≺ (0, 0) ∼ (1, 1) ∼ (0, 1) ≺ (0, Ω) ∼ (1, Ω) ≺ (Ω, Ω) (∅, ·) ≺ (0, 0) ≺ (0, 1) ≺ (1, 1) ≺ (0, Ω) ≺ (1, Ω) ≺ (Ω, Ω) (∅, ·) ≺ (0, 0) ≺ (0, 1) ≺ (1, 1) ∼ (0, Ω) ≺ (1, Ω) ≺ (Ω, Ω) (∅, ·) ≺ (0, 0) ≺ (0, 1) ≺ (0, Ω) ≺ (1, 1) ≺ (1, Ω) ≺ (Ω, Ω) Thus we have 9 total preorders (<ref>), (<ref>)–(<ref>) representable by a rank 1. We claim these are exactly the ones satisfying the stated Domotor axioms. The soundness direction is straightforward while the completeness direction can be shown via casework. Note that the n=2 instance of 𝖰5 is just monotonicity (in both arguments, by symmetry), namely that (A, B) ≼ (A, C) ⇒ (D, B) ≼ (D, C) and analogously in the first argument. This presumes that (∅, Ω) ≺ (A, B) and (∅, Ω) ≺ (D, C). We also have strict monotonicity. Below we will generally take 𝖰1–𝖰4 for granted, which just amount to the axioms of a nondegenerate total preorder, symmetric on the pairs. Our cases are: * If (0, Ω) ≺ (1, Ω): * If (∅, Ω) ∼ (0, 0): conclude order (<ref>), by 𝖰5 and 𝖰6. * If (∅, Ω) ≺ (0, 0): by 𝖰5 conclude (0, 0) ≺ (0, 1) ≺ (1, 1). By 𝖰6 conclude (1, Ω) ≺ (Ω, Ω). * If (1, 1) ≺ (0, Ω): order (<ref>) * If (1, 1) ∼ (0, Ω): order (<ref>) * If (1, 1) ≻ (0, Ω): order (<ref>) * If (0, Ω) ∼ (1, Ω) conclude order (<ref>), by repeated applications of 𝖰5 and 𝖰6. * If (0, Ω) ≻ (1, Ω): * If (∅, Ω) ∼ (1, 1): conclude order (<ref>) by 𝖰5, 𝖰6. * If If (∅, Ω) ≺ (1, 1): by 𝖰5 conclude (1, 1) ≺ (0, 1) ≺ (0, 0). By 𝖰6 conclude (0, Ω) ≺ (Ω, Ω). * If (0, 0) ≺ (1, Ω): order (<ref>) * If (0, 0) ∼ (1, Ω): order (<ref>) * If (0, 0) ≻ (1, Ω): order (<ref>) § OUTLINE * Discuss qualitative-quantitive in general * Discuss wrt probability in particular. Presupposition: roughly, a logic is not quantitative if it can be given a semantics that is not explicitly numerical. * Motivate alternative division in probability logics: mere addition and addition + multiplication. This is where we introduce a space of languages. * Start with additive case, then look at comparative case. 
Discuss additive systems wrt axiomatization and complexity. * Start with polynomial case, then look at conditional case. Discuss multiplicative systems wrt axiomatization and complexity. ToDo: * Show that comparative conditional probability is not finitely axiomatizable. * Finish completeness proof for polynomial calculus. * Finish complexity sections. * Ensure uniformity of notation and style. * Write the intro and later discussion. Results that would be nice, but not absolutely necessary: * Axiomatize the quadratic and/or comparative language. Either show Q5, Q6 are complete, or identify some powerful rule (like the polarization rule for comparative probability) that establishes it. Clearer analysis of the arithmetical content of Q5. § SOURCES ON QUALITATIVE/QUANTITATIVE * de Finetti, https://www.stat.unm.edu/ ronald/courses/Int_Bayes/definetti_exchangeability.pdf“Foresight” (1937): “While starting out from a purely qualitative system of axioms, one arrives at a quantitative measure of probability” (101) “The [present] axiomatization has the advantage of permitting a deeper and more detailed analysis, of starting out with only qualitative notions, and of eliminating the notion of `money', foreign to the question of probability” (102) * Koopman, https://www.jstor.org/stable/1969003“Axioms and Algebra of Intuitive Probability” (1940) “Such a number is in no wise a self-evident concomitant with or expression of the primordial intuition of probability, but rather a mathematical construct derived from the latter under very special conditions and as the result of a fairly complicated process implicitly based on many of the very intuitive assumptions which we are endeavouring to axiomatize” (269) * Fine, https://www.elsevier.com/books/theories-of-probability/fine/978-0-12-256450-5Theories of Probability (1973), Chapter 2 “Grounds for a greater interest in this neglected concept include the following points. * CP [comparative probability] provides a more realistic model of random phenomena when we have too little prior information and data to estimate quantitative probability reasonably. * CP provides a wider class of models of random phenomena than does the usual quantitative theory. * CP illuminates the structure of quantitative probability, and especially the Kolmogorov axioms, by providing a base from which to derive quantitative probability. * CP appears to be a sufficiently rich concept to support a variety of significant applications.” * Narens, https://link.springer.com/article/10.1007/BF00247745“On qualitative axiomatizations for probability theory” (1980) “The qualitative [here: comparative] approach provides a powerful method for the scrutinization and revelation of underlying assumptions of probability theory, is a link to empirical probabilistic concerns, and is a point of departure for the formulation of alternative probabilistic concepts.” * Suppes, https://suppes-corpus.stanford.edu/sites/g/files/sbiybj7316/f/qualitative_theory_of_subjective_probability_321.pdf“Qualitative Theory of Subjective Probability” (1994) “The spirit of these axioms is to place restraints on qualitative judgments of probability which will be sufficient to prove a standard representation theorem, i.e. to guarantee the existence of a numerical probability measure in the standard sense. From this standpoint the axioms may be regarded as a contribution to the theory of measurement with particular reference to comparative judgments of probability. 
The central question for such a set of axioms is how complicated must be the condition on the qualitative relation more probable than in order to obtain a numerical probability measure over events. The intuitive idea of using a comparative qualitative relation is that individuals can realistically be expected to make such judgments in a direct way, as they cannot when the comparison is required to be quantitative. ” (18) * Kyburg (e.g., in https://www.sciencedirect.com/science/article/pii/S1570868303000107this paper) considers qualitative the realm of all-out belief, and quantitative the realm of degrees-of-belief. Maybe distinction between degree-of-belief and belief-in-degrees is helpful here? Also a very interesting discussion of measurement and the quantitative-qualitative leap in his https://www.cambridge.org/us/academic/subjects/philosophy/logic/theory-and-measurementTheory and Measurement (relevant references to Carnap and Hempel there too). * Jaynes (Probability: The Logic of Science, 2003) treats judgments of the form A|B ≻ C|D, and much more, as qualitative. * Hawthorne & Makinson, https://link.springer.com/article/10.1007/s11225-007-9061-x“The Quantitative/Qualitative Watershed” (2007): “Broadly speaking, there are two ways of approaching the formal analysis of uncertain reasoning: quantitatively, using in particular probability relationships, or by means of qualitative criteria. As is widely recognized, the consequence relations that are generated in these two ways behave quite differently.” “A central theme of this paper is to identify rules that mark a watershed between logics for probabilistically defined consequence relations and those defined qualitatively.” (268) * Delgrande et al, https://www.sciencedirect.com/science/article/pii/S0004370218301036“The Logic of Qualitative Probability” (2019) “Fagin et al. (1990) provide a quantitative (as opposed to qualitative) approach to reasoning about probability. Their approach is expressed at a much higher level, and assumes the existence of integers, as well as addition and multiplication.” “Unlike Fagin et al., our approach is qualitative and we do not employ the machinery of arithmetic in expressing our proof theory.” “What distinguishes qualitative from quantitative probability (truth valued) logics is that qualitative probability logics do not employ quantities or arithmetic operations in the syntax, and the informal reading of the qualitative probability formulas do not require a quantitative interpretation.” However, both qualitative and quantitative probability logics share a formal semantics involving real (quantitative) probability. In the case of a qualitative probability logic, the axiomatization thus is shown (via the soundness and completeness results) to faithfully reflect probabilistic principles, in that for a consistent set of assertions, there is guaranteed to be a corresponding realising probability distribution.” * Other recent papers on comparativism about (partial) belief: * Stefánsson, “https://www.tandfonline.com/doi/full/10.1080/00048402.2016.1224906What is `real' in probabilism" (2015), “https://www.tandfonline.com/doi/full/10.1080/00048402.2017.1349159On the ratio challenge for comparativism” (2018) * Elliott, “https://link.springer.com/article/10.1007%2Fs10670-020-00329-xComparativism and the measurement of partial belief” (2020) (The latter also talks about what might be called quaternary comparativism. See Fn. 
3) * Many of these papers are concerned with the “ratio challenge” of saying how to make sense of claims about rational comparisons of likelihood. Is this not exactly what we get with ℒ_add? Fine (“https://link.springer.com/chapter/10.1007%2F978-94-017-0837-1_8An argument for comparative probability”), Stefánsson, and others have gestured at a version of this. Elliott argues against it. One of the concerns is about the meaningfulness of interpersonal comparisons involving the “addition-like” operator: “Before we could know what it means for Sally to believe P twice as much as Q, we would first have to take into account her entire confidence ranking, work out what the relevant operation should be, and only then give some doxastic meaning to the statement.” Elliott's preferred response is to supplement comparativism with something else, such as preference. Interestingly, he also mentions relations of evidential support, presumable claims like 𝐏(α|β)≻𝐏(α), which brings us into the multiplicative setting. Why can we not just take rational comparisons as primitive the same way we do unqualified comparisons? Concerning the quotation, I see no reason we need to know the entire confidence ranking; just as in pure comparative case, we have constraints that cannot be violated. Give your betting interpretation for those rational comparisons, or whatever else you like, to make interpersonal comparisons possible. It seems the underlying logic of the judgments stands by itself. * Other recent authors that invoke comparative/qualitative probability: Conor Mayo-Wilson, Nicholas DiBella, Benjamin Eva, Jason Konek, Branden Fitelson, Yang Liu … § EXTENDED ABSTRACT Within logical studies of probability, the quantitative–qualitative distinction is often taken to track the presence or absence of explicit arithmetical operations in the underlying language. Paradigm examples of qualitative probability-logical systems are comparative probability, concerning probability comparisons captured by statements of the form α≿β and introduced in its modern formulation by de Finetti (1931), and comparative conditional probability, involving conditional comparisons α|γ≿β|δ, first studied in depth by Koopman (1940). It has often been argued—on empirical, computational, or cognitive grounds—that such purportedly qualitative systems are to be preferred to their quantitative counterparts in many areas of application. In the present contribution we put pressure on these arguments by appeal to new logical results about these systems, namely around computational complexity and issues of axiomatization. We propose, explore, and promote an alternative division in the space of probability logics, based on the distinction of whether a system (implicitly or explicitly) encodes only additive reasoning over numbers in the real unit interval, or additive together with multiplicative reasoning. Our first set of results demonstrates that this division coincides with a meaningful and robust complexity divide. It is already known that probability logics incapable of encoding any non-trivial multiplicative constraints are routinely NP-complete; this includes not only pure comparative probability, but also probability logic with explicit addition over terms (e.g., Fagin et al. 1990). We show that allowing even a modicum of multiplication leads robustly to systems whose satisfiability problem is complete for the class ∃ℝ, that is, the complexity class corresponding to satisfiability in the existential theory of the reals. 
This class is believed to be strictly harder than NP, and known unconditionally to lie between NP and PSPACE. At one end of the spectrum, we show that this phenomenon occurs in a paradigmally quantitative language of arbitrary polynomials over probability terms, with explicit addition and multiplication. But perhaps surprisingly, we also obtain ∃ℝ-completeness for seemingly much simpler (“qualitative”) systems like comparative conditional probability, and even a very basic language with equality between probability terms and atomic conditional independence statements. In essence, even such syntactically quite restricted languages harbour enough structure to admit reduction of arbitrary real polynomial queries. These complexity results already bear on issues of practical relevance. In any applied domain where conditional information is relevant—(automated) causal discovery and inference being a notable example—there is no loss from a complexity perspective in employing a rich quantitative language. What we gain in expressive power and intuitive ease of use, we do not actually lose in complexity. We next turn to questions of axiomatization. Within the class of additive systems, probability logic built over linear inequalities is known to possess a simple, finite axiomatization (e.g., Fagin et al. 1990): application of the well-known Fourier-Motzkin transposition theorem can essentially be encoded into principles of the logic. By contrast, the only known axiomatizations of pure comparative probability employ infinitely many axiom schemes, tracing back to seminal ideas of Kraft et al. (1959) and Scott (1964). Appealing to previous results by Fishburn (1998) and a variation on a theorem of Vaught (1954), we show that this situation is inevitable: unlike its (equally complex) explicitly quantitative counterpart, there can be no finite axiomatization of comparative probability. Finally, we show that the same pattern arises in the ∃ℝ class (the probability logics incorporating some form of mutliplication). Our first main result here is that probability logic for the full language of polynomials over probability terms can be given a simple, finite axiomatization. Rather than appealing to Fourier-Motzkin, here we draw upon the Positivstellensatz for semi-algebraic systems (Krivine 1964), showing that a small set of intuitive principles suffices to derive a contradiction from an unsatisfiable set of inequalities. Meanwhile, for comparative conditional probability, we extend the results above for the additive case to show that there can again be no finite axiomatization. Rather, in such a restricted language it appears that we need to compensate for the lack of explicit arithmetical operations through an infinite array of increasingly long principles. The upshot of our results is that, at least from a logical point of view, there is little to recommend the restriction to qualitative systems (viz. systems lacking syntax for any numerical operator). Within both additive systems and additive-multiplicative systems, the same picture emerges: augmenting a qualitative system with explicit arithmetical primitives results in both greater expressive power and simpler, finite axiomatizations, all at no additional cost in computational complexity. In addition to these conceptual and application-oriented lessons, the work here offers a more complete and meaningfully regimented picture of the larger space of probability logics.
http://arxiv.org/abs/2307.05338v1
20230710173004
Root Causal Inference from Single Cell RNA Sequencing with the Negative Binomial
[ "Eric V. Strobl" ]
q-bio.GN
[ "q-bio.GN" ]
Root Causal Inference from Single Cell RNA Sequencing with the Negative Binomial

Accurately inferring the root causes of disease from sequencing data can improve the discovery of novel therapeutic targets. However, existing root causal inference algorithms require perfectly measured continuous random variables. Single cell RNA sequencing (scRNA-seq) datasets contain large numbers of cells but non-negative counts measured by an error-prone process. We therefore introduce an algorithm called Root Causal Inference with Negative Binomials (RCI-NB) that accounts for count-based measurement error by separating negative binomial distributions into their gamma and Poisson components; the gamma distributions form a fully identifiable but latent post non-linear causal model representing the true RNA expression levels, which we only observe with Poisson corruption. RCI-NB identifies patient-specific root causal contributions from scRNA-seq datasets by integrating novel sparse regression and goodness of fit testing procedures that bypass Poisson measurement error. Experiments demonstrate significant improvements over existing alternatives.

Eric V. Strobl

§ INTRODUCTION

Causal inference algorithms identify causal relations from data. Most investigators infer causation using randomized controlled trials (RCTs). However, an RCT cannot distinguish between a cause and a root cause of disease, or the initial perturbation to a biological system that ultimately induces a diagnostic label. Identifying the root causes of disease is critical for (a) understanding disease mechanisms and (b) discovering drug targets that treat disease at its biological onset.

Single cell RNA sequencing (scRNA-seq) datasets represent prime targets for root causal inference because they provide global but fine-grained snapshots of gene expression with ample numbers of cells. scRNA-seq also provides a functional read-out more proximal to the clinical phenotype than single nucleotide polymorphisms. Accurately inferring patient-specific root causes from scRNA-seq therefore has the potential to improve the discovery of novel therapeutic targets that significantly impact patient symptoms.

Unfortunately, most existing root causal inference algorithms assume perfectly measured, continuous random variables <cit.>. Sequencing datasets contain counts measured by an error-prone sequencing process. Moreover, modern single cell pipelines cannot replicate the same measurements per cell <cit.>. Customized methods that appropriately account for non-negativity and measurement error – without relying on technical replicates – have the potential to substantially improve the performance of existing methods on scRNA-seq.

The negative binomial distribution models counts as a mixture of the Poisson and gamma distributions, where the Poisson component can represent measurement error and the gamma distribution the expression level of an RNA molecule. The negative binomial also fits scRNA-seq data well by accounting for overdispersion and a high proportion of zeros <cit.>. As a result, many scientists analyze RNA-seq data with the negative binomial in the context of regression, normalization or differential hypothesis testing <cit.>. None, however, have utilized the negative binomial for identifying the root causes of disease.
[breakable,enhanced,frame hidden] We therefore extend the negative binomial to root causal inference as follows: * We propose a post-nonlinear causal model with gamma distributed error terms representing true continuous RNA expression levels. We can however only measure the expression levels as counts using a noisy sequencing process, which we model by the Poisson. The resultant Poisson-gamma mixture is the negative binomial. * We introduce a negative binomial regression procedure and goodness of fit hypothesis test that both bypass Poisson measurement error without technical replicates. * We integrate the regression procedure into an algorithm that identifies the parameters of the gamma distributions and latent causal graph. * We finally utilize the recovered parameters to identify the root causes of disease unique to each patient. The resultant method called Root Causal Inference with Negative Binomials (RCI-NB) identifies patient-specific root causes of disease more accurately than existing alternatives from both simulated and real scRNA-seq datasets. § BACKGROUND §.§ Structural Equations We can formalize causal inference under the framework of structural equation models (SEMs), or a set of deterministic equations over p random variables X such that: X_i = f_i(Pa(X_i),E_i), ∀X_i ∈X. The random vector E denotes a set of mutually independent error terms, and Pa(X_i) ⊆X∖X_i the parents, or direct causes, of X_i. We can therefore associate a directed graph 𝔾 over X to an SEM by drawing a directed edge from each member of Pa(X_i) to X_i. A directed path in 𝔾 from X_i to X_j refers to a sequence of adjacent directed edges from X_i to X_j. We say that X_i is an ancestor of X_j in 𝔾 if there exists a directed path from X_i to X_j or X_i = X_j; similarly, X_j is a descendant of X_i. A cycle occurs when X_i is an ancestor of X_j and we have X_j →X_i. We call 𝔾 a directed acyclic graph (DAG) if it contains no cycles. The joint distribution ℙ_X over X satisfies the causal Markov condition if every variable in X is independent of its non-descendants given its parents. Furthermore, ℙ_X is causally minimal if it satisfies the causal Markov condition relative to 𝔾 but not to any proper sub-graph of 𝔾. §.§ Related Work Root causal analysis refers to a suite of methods designed to detect the root causes of undesired outcomes, typically in man-made systems within the industrial or healthcare industry <cit.>. The methods require a painstaking manual approach that implicitly or explicitly reconstructs the underlying causal graph. Strategies also rely on participants with deep knowledge of the underlying causal processes and therefore falter when applied to biological systems that remain largely unknown. A second line of work takes a similar approach by assuming a known set of structural equations but formalizes root causal analysis using the error terms of SEMs <cit.>. These works unfortunately do not define patient-specific root causes of disease properly. For example, Root Causal Analysis of Outliers recovers root causal contribution scores for symptoms that are worse than the symptoms of a given patient <cit.>. We do not want to eliminate just the worse symptoms of a patient, but all of his symptoms. Attempting to correct the method with a predetermined cut-off score unfortunately foregoes patient-specificity. The Model Substitution algorithm proposed in <cit.> also loses specificity by identifying the root causes of changes in the marginal distribution of the diagnosis. 
Moreover, both MS and RCAO assume that the user has knowledge of the structural equations and the “normal” counterfactual distributions of the error terms. The methods further require that the diagnosis correspond to a noiseless cutoff score, even though a diagnosis is noisy because it depends on the diagnostician in practice. RCAO and MS therefore utilize improper definitions of patient-specific root causes of disease and require a noiseless label, a known SEM as well as known counterfactual error term distributions. A third line of work instead identifies patient-specific root causes of disease using the conditional distribution of the diagnosis given the error terms. The authors do not require access to the underlying structural equations or error term distributions. <cit.> performed independent component analysis (ICA) on electronic health record data and correctly recovered the top five root causes of hepatocellular carcinoma. The approach achieved clinical face validity, but the authors did not connect the strategy to causality. <cit.> later extended the idea to root causal analysis and introduced a more efficient algorithm called Root Causal Inference (RCI). The same authors later created a related procedure for handling latent confounding <cit.>. All three of these algorithms assume linear relationships and continuous additive noise. Investigators thus later extended the work to the non-linear setting with the heteroscedastic noise model that allows non-linear conditional expectations and variances <cit.>. Unfortunately, even the non-linear approach assumes continuous random variables and no measurement error. The above algorithms therefore perform poorly when directly run on scRNA-seq datasets. We improve on the aforementioned works by introducing an algorithm called Root Causal Inference with Negative Binomials (RCI-NB) that accounts for the measurement error and counts of scRNA-seq by bypassing the Poisson. The algorithm utilizes novel simulation-based regression and goodness of fit testing procedures. RCI-NB automatically recovers all parameters needed for the simulations using a top-down procedure introduced in Section <ref>. As a result, the algorithm requires no prior knowledge about the underlying structural equations or counterfactual distributions. Furthermore, RCI-NB allows a noisy label and maintains patient-specificity by identifying changes in the conditional distribution of the diagnosis. § NEGATIVE BINOMIAL MODEL We begin the development of RCI-NB by introducing a negative binomial SEM. We model the expression levels of RNA molecules X using the following post non-linear SEM: X_i = exp( Xβ_· i ) E_i = exp( Xβ_· i + ln(E_i)), for each X_i ∈X similar to Equation (<ref>), where post non-linearity refers to the outer exponentiation. Exponentiation ensures that all variables in X are positive and enforces faithfulness to the inverse canonical link function of the negative binomial generalized linear model <cit.>. The entry β_ji≠ 0 if and only if X_j ∈Pa(X_i). We write β_· i to refer to the i^th column of β, and β_A i to rows associated with A⊆X in the i^th column. <cit.> proved full identifiability of β except in a few scenarios not applicable to this work. Many RNA molecules have low expression levels. The gamma distribution places larger probability mass near zero than the log-normal with equal mean and variance. We therefore further assume that each E_i ∈E follows the gamma distribution Γ(r_i,r_i/exp(Pγ_· i)) with shape r_i and rate r_i/exp(Pγ_· i). 
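For concreteness, the following minimal Python sketch simulates this generative model for a hypothetical three-gene chain. The chain structure, parameter values, and variable names are illustrative assumptions rather than settings used in the paper; the one-hot patient indicators P and the Poisson measurement step anticipate the definitions given immediately below.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setting: p = 3 genes on a chain X1 -> X2 -> X3, q = 2 patients, n cells.
p, q, n = 3, 2, 5
beta = np.array([[0.0, -0.5,  0.0],   # beta[j, i] != 0 iff X_j is a parent of X_i
                 [0.0,  0.0, -0.4],
                 [0.0,  0.0,  0.0]])
gamma = rng.uniform(-1.0, -0.25, size=(q, p))   # patient-specific offsets (illustrative values)
r = np.array([0.5, 0.8, 0.3])                   # gamma shapes, i.e. the dispersions r_i (illustrative)

patient = rng.integers(q, size=n)
P = np.eye(q)[patient]                          # one-hot patient indicators (defined formally below)

# Error terms: E_i ~ Gamma(shape = r_i, rate = r_i / exp(P gamma_{.i})), so E[E_i | P] = exp(P gamma_{.i}).
rate = r / np.exp(P @ gamma)
E = rng.gamma(shape=r, scale=1.0 / rate)        # numpy parameterizes the gamma by scale = 1 / rate

# Latent expression levels through the post non-linear SEM X_i = exp(X beta_{.i}) E_i,
# filled in topological order (roots first).
X = np.zeros((n, p))
for i in range(p):
    X[:, i] = np.exp(X @ beta[:, i]) * E[:, i]

# Observed counts: the Poisson measurement step introduced in the next subsection,
# Xtilde_i ~ Poisson(X_i * C) with cell-specific efficiency C.
C = rng.gamma(shape=1.0, scale=1.0, size=n)
Xtilde = rng.poisson(X * C[:, None])
print(Xtilde)
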
The set P contains q binary variables each indicating a patient from which we harvest cells. The error terms are therefore mutually independent given P, or within each patient. We unfortunately cannot observe X in practice. Sequencing technologies instead approximate the expression level of each RNA by reverse transcribing and amplifying the molecules. Most technologies then count the number of complementary DNA sequences that align to a reference genome <cit.>. As a result, sequencing technologies such as scRNA-seq only approximate reference RNA expression levels by counts <cit.>. The efficiency of the above process may differ between cells depending on e.g., cell diameter and the amount of reagents used. We can also only detect a small proportion of RNA molecules existing in a cell in general <cit.>. We therefore apply the law of rare events <cit.> and henceforth assume that we observe Poisson-corrupted counts X with each X_i ∈X drawn according to: X_i ∼Pois(X_i C) = (X_i C)^X_iexp(-X_i C)/X_i!. The random variable C>0 denotes the cell-specific efficiencies of the sequencing process. The efficiencies differ due to the technology – not due to the biological system modeled by Equation (<ref>). We can therefore approximate C to high accuracy by a variety of control methodologies such as estimated library sizes or RNA spike-ins <cit.>. Recall that X_i =exp(Xβ_· i)E_i from Equation (<ref>). We derive the conditional distribution of X_i given Pa(X_i) ∪P∪ C by marginalizing out Γ(r_i,r_i). The resultant Poisson-gamma mixture, or negative binomial distribution, obeys the probability mass function: ℙ(X_i | Pa(X_i), P,C) = Γ(X_i+r_i)/Γ(X_i + 1) Γ(r_i)( r_i/r_i+μ_i)^r_i( μ_i/r_i+μ_i)^X_i. with dispersion parameter r_i ∈r, conditional expectation μ_i=exp(Xβ_· i + Pγ_· i)C and variance μ_i + 1/r_iμ_i^2. Several groups have shown that the quadratic variance accurately accounts for the overdispersion seen in real scRNA-seq data <cit.>. We will drop the subscripts of r_i and μ_i to prevent notational cluttering, when it is clear that we focus on one X_i ∈X. In summary, we assume X follows the fully identifiable SEM in Equation (<ref>) with gamma distributed error terms. We however can only observe X with Poisson measurement error – denoted by X. We now seek to recover (β,r,γ) from X∪P∪ C alone using negative binomial regression and goodness of fit testing, which we describe in the next two sections. § NEGATIVE BINOMIAL REGRESSION §.§ Corrected Score Equations We first develop a negative binomial regression procedure that bypasses Poisson measurement error among the predictors. Most existing negative binomial regressors erroneously assume perfectly measured predictors or only Gaussian measurement error <cit.>. We in particular seek to regress X_i on P and the perfectly measured non-descendants of X_i, denoted by A⊆X∖X_i, but only have access to the Poisson corrupted counterparts A. Let Z = (A/C, P) and Z = (A, P). Further let α = (β_A i, γ_· i)^T. We can then write the logarithm of the negative binomial probability mass function L(α, r) as follows: lnΓ(X_i + r)/Γ(X_i + 1) Γ(r) + r ln(r) + X_i Zα - (X_i + r) ln(r+μ). Directly maximizing the expectation of the above expression requires access to Z. <cit.> showed that, if we can construct a corrected function L(α,r) where: 𝔼[L(α,r)|X_i, Z,C] = L(α,r), then maximizing the (unconditional) expectation of L(α,r) still yields unbiased estimates of α and r. 
Observe that 𝔼(X_iZ | X_i, Z,C) = X_iZ for the term X_i Zα in Expression (<ref>), so L(α,r) satisfies: lnΓ(X_i + r)/Γ(X_i + 1) Γ(r) +r ln(r) + X_iZα - f(X_i, Z, C), such that 𝔼(f|X_i, Z,C) = (X_i + r) ln(r+μ) for some function f. We find f difficult to derive analytically. We can however simplify (X_i + r) ln(r+μ) in Expression (<ref>) as follows: 𝔼_X_i Z C((X_i + r) ln(r+μ) ) =𝔼_ZC(𝔼_X_i|ZC (X_i + r) ln(r+μ)) = 𝔼_ZC((μ + r)ln(r+μ)), so that we can approximate the last expectation by averaging over s samples drawn from the density p(Z,C)=p(Z)p(C). We will show how to estimate p(Z,C) from data in Section <ref>. We therefore equivalently consider the following corrected function L(α,r): lnΓ(X_i + r)/Γ(X_i + 1) Γ(r) +r ln(r) + X_iZα - 𝔼((μ + r)ln(r+μ)), which satisfies Equation (<ref>) as required. We then set the expectation of the derivatives of L(α,r) to zero: α : 𝔼(X_i Z) - 𝔼(μZ) = 0, r : 𝔼ψ(X_i + r) - ψ(r) + ln(r) - 𝔼ln(r + μ) = 0, where ψ denotes the digamma function. We replace the expectations with sample averages and quickly obtain the roots (α_n,r_n) = θ_n of the corresponding score equations with n samples by the Newton-Raphson method. Let θ_0 denote the ground truth parameter values. The proposed approach achieves asymptotic normality: (Asymptotic normality) Assume n →∞, s →∞ and n/s → 0. Further assume that Var(μZ, ln(r+μ)) and Σ = -𝔼 S^'(θ_0) are positive definite. Then √(n)(θ_n - θ_0) →𝒩(0,Σ^-1(J_1 + J_2 + J_3) Σ^-1). We define S^', J_1, J_2 and J_3 as well as detail longer proofs in the Appendix. §.§ Regularization Causal graphs in biology are frequently sparse, so we next introduce sparsity promoting regularization into the above negative binomial regressor. The equation for α in Equation (<ref>) does not depend on r and is asymptotically equivalent to the score equation of the negative binomial with r fixed. Recall that the negative binomial is a member of the exponential family with r fixed. We therefore introduce regularization via the Bayesian information criterion (BIC) score <cit.>. Let L_j(α,r) denote the corrected log-likelihood for sample j. We maximize: (1/n∑_j=1^n L_j(α,r)) - λ_n/2 (β_A i_0 + γ_· i - γ_· i_2^2), where λ_n = ln(n)/n according to BIC and γ_· i = 1/q∑_k=1^q γ_ki. We optimize the above expression quickly by customizing the expectation-maximization (EM) approach proposed in <cit.>. The following equivalence relation holds: β_A i_0 = ∑_j ∈ Rβ_j i^2 /β_j i^2 = ∑_j ∈ Rβ_j i^2/η_j^2 = β_Ri/η_R^2_2, where η = (|β_A i|,1) and R indexes the non-zero elements in β_A i. We collect β_R i into the first |R| entries of β_A i for ease of notation. The ones in η correspond to γ_· i. Assume now that η is a latent variable. The EM algorithm successively approximates α by iterating between expectation: (1/n∑_j=1^n L_j(α,r)) - λ_n/2( β_Ri/η_R^2_2 + γ_· i - γ_· i_2^2 ), and maximization via the equation: 1/n∑_j=1^n x_ijz_j - 1/s∑_j=1^s μ_jz_j - λ_n ( β_Ri/η_R, 0,γ_· i - γ_· i) = 0. The zero vector on the left hand side corresponds to elements not in R. The above equation is potentially unstable due to division by entries in η_R close to zero. Further, we do not know the indices R in practice. We resolve both of these issues by element-wise multiplying both sides of the score equation by η and instead solve: (1/n∑_j=1^n x_ijz_j - 1/s∑_j=1^s μ_jz_j ) ⊙η - λ_n (β_A i, γ_· i - γ_· i) = 0. We summarize the EM algorithm in Algorithm <ref>; it almost always converges with a finite number of samples in practice. 
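To illustrate how the corrected score bypasses Poisson error in the predictors, the sketch below solves the sample analogue of the score equations for a hypothetical toy model with one parent gene and a single patient, assuming that p(A)p(C) can be simulated exactly (in RP these draws come from the already-recovered roots). The log reparameterization of the dispersion and the clipping of the linear predictor are included purely for numerical stability, and the design-vector bookkeeping is simplified relative to the Z, Z̃ notation above: the corrected term pairs μ with the noise-free predictor scaled by C, so that the two sample averages agree in expectation at the true parameters.

import numpy as np
from scipy.optimize import root
from scipy.special import digamma

rng = np.random.default_rng(1)

# --- Hypothetical toy data: one parent gene A, a single patient (gamma acts as an intercept) ---
n, s = 20_000, 200_000
beta_true, gamma_true, r_true = -0.5, -0.6, 0.7
C = rng.gamma(1.0, 1.0, size=n)                       # cell efficiencies, assumed known
A = rng.gamma(0.6, scale=1.0 / 0.6, size=n)           # latent parent expression (mean 1)
E = rng.gamma(r_true, scale=np.exp(gamma_true) / r_true, size=n)
X_lat = np.exp(beta_true * A) * E                     # latent child expression
A_til = rng.poisson(A * C)                            # Poisson-corrupted parent counts
X_til = rng.poisson(X_lat * C)                        # Poisson-corrupted child counts

# --- Independent draws from p(A)p(C); in RP these come from the already-recovered roots ---
A_sim = rng.gamma(0.6, scale=1.0 / 0.6, size=s)
C_sim = rng.gamma(1.0, 1.0, size=s)

def corrected_score(theta):
    """Sample version of the corrected score equations: the data part uses the noisy
    counts, while the mu part is averaged over noise-free simulated draws."""
    b, g, log_r = theta
    r = np.exp(log_r)                                 # keep the dispersion positive
    lin = np.clip(b * A_sim + g, -30.0, 30.0)         # clipped only for numerical stability
    mu_sim = np.exp(lin) * C_sim
    # alpha part: E[X_til * Z_til] - E[mu * Z] = 0, noting E[X_til * A_til | .] = mu * A * C
    eq_b = np.mean(X_til * A_til) - np.mean(mu_sim * A_sim * C_sim)
    eq_g = np.mean(X_til) - np.mean(mu_sim)
    # r part: E[psi(X_til + r)] - psi(r) + log(r) - E[log(r + mu)] = 0
    eq_r = np.mean(digamma(X_til + r)) - digamma(r) + np.log(r) - np.mean(np.log(r + mu_sim))
    return [eq_b, eq_g, eq_r]

sol = root(corrected_score, x0=[-0.2, -0.2, 0.0], method="hybr")
print(sol.x[:2], np.exp(sol.x[2]))   # approaches (beta_true, gamma_true) and r_true as n, s grow
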
§ GOODNESS OF FIT TEST We have thus far assumed that p(X_i|Z,C) indeed follows a negative binomial distribution. We now address the problem of determining whether the negative binomial distribution holds in this section by constructing a score-based goodness of fit test. Assume for now that we have access to Z∪ C in order to compute μ. We construct a hypothesis test with a flexible order-k alternative probability mass function: p_k(X_i|Z,C) = N(h,ϕ,μ, r) exp( ∑_j=1^k h_j(X_i,μ, r) ϕ_j ) p_0(X_i|Z,C), where p_0(X_i|Z,C) denotes the negative binomial probability mass function under the null hypothesis. We now suppress the inputs to some functions for cleaner exposition. The function exp( ∑_j=1^k h_j ϕ_j ) =exp(hϕ) is non-negative and equal to one under the null hypothesis that ϕ = 0, or when the negative binomial p_0(X_i|Z,C) holds. The normalizing function N ensures that p_k integrates to one. Each function h_j must have zero expectation under the null hypothesis, denoted by 𝔼_0(h_j)=0. We will show how to intelligently choose such functions shortly. We now take the expectation of the logarithm of p_k. The normalizing function N has derivative ∂log N/∂ϕ equal to -𝔼_k(h), or the negative expectation under the order-k alternative (Lemma 4.2.1 in <cit.>). As a result, ∂log N/∂ϕ = 0 under the null hypothesis, so we can write the population score equation with respect to ϕ under the null as 𝔼_0(h) = 0. This implies that: U = 1/n∑_j=1^n h_j^T Π^-1 h_j χ^2_k, by the central limit theorem. Here, we index the samples of h rather than its entries. The matrix Π denotes the sample covariance matrix of the vector h, which we will describe soon. The χ^2 test loses power with too many functions in h and may not converge to the asymptotic distribution fast enough with large variances for realistic sample sizes. We therefore fix k=2 and utilize the bounded functions h_j = m_j - 𝔼(m_j | μ, r), where m_1 = exp(-X_i) ∈ (0,1] and m_2 = sin(X_i) ∈ [-1,1]. The conditional expectations of m_1 and m_2 admit closed forms under the negative binomial: 𝔼(m_1 | μ, r) = exp(r) (r/((exp(1) - 1) μ + exp(1) r))^r, 𝔼(m_2 | μ, r) = ir^r/2((-exp(-i)μ + μ + r)^-r - (-exp(i)μ + μ + r)^-r), where i = √(-1). Recall that we estimate (α, r) = θ in practice by NB-EM. Boos <cit.> used a first-order Taylorian expansion to account for the non-maximum likelihood estimation using an adjusted covariance matrix Π equal to: B_ϕϕ - A_ϕθA_θθ^-1B_θϕ - B_ϕθ(A_θθ^-1)^TA_ϕθ^T + A_ϕθA_θθ^-1B_θθ(A_θθ^-1)^TA_ϕθ^T where: S = ( h_1-𝔼(m_1 | μ, r), h_2-𝔼(m_2 | μ, r), X_iZ-μZ)^T, A = -𝔼( ∂ S/∂ (ϕ,θ)), B = 𝔼(S S^T ). We must finally account for the fact that we observe Z but simulate Z. We split the functions in S into two groups: S_Z = ( h_1, h_2, X_iZ)^T, S_Z = ( -𝔼(m_1 | μ, r), -𝔼(m_2 | μ, r), -μZ)^T, yielding the new matrices: A = -𝔼_Z( ∂ S_Z/∂ (ϕ,θ)), B = 𝔼_Z(S_ZS^T_Z) +𝔼_Z(S_ZS_Z^T ). We then reject the null hypothesis that the negative binomial holds when the statistic U in Equation (<ref>) falls above the critical value determined by the Type I error rate. § CAUSAL INFERENCE §.§ Parameter Estimation We have thus far created a sparse negative binomial regressor and score-based goodness of fit test that both bypass Poisson measurement error. They however require access to p(Z,C). We now design an algorithm called Recover Parameters (RP) that utilizes regression and goodness of fit testing to systematically identify the β, r and γ parameters of p(Z,C). We summarize RP in Algorithm <ref>. 
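Since Algorithm <ref> itself is not reproduced in this text, the following structural sketch reconstructs the RP loop from the walkthrough that follows. The helpers nb_em_fit, gof_statistic, and simulate_latent are hypothetical placeholders for the NB-EM regression, the score-based U statistic, and forward simulation of the recovered variables through the structural equations; they are not functions from the paper's codebase.

def recover_parameters(Xtil, P, C, n_sim=100_000):
    """Sketch of RP: top-down recovery of (beta, r, gamma)."""
    remaining = list(range(Xtil.shape[1]))   # indices still in X
    ancestors = []                           # indices moved into A, in discovery order
    params = {}                              # i -> (beta_Ai, r_i, gamma_i)

    while remaining:
        # Simulate noise-free samples of the recovered set A (and bootstrap C).
        A_sim, P_sim, C_sim = simulate_latent(ancestors, params, P, C, n_sim)

        fits, stats = {}, {}
        for i in remaining:
            # Sparse negative binomial regression of X_i on A, P and C (NB-EM).
            fits[i] = nb_em_fit(Xtil[:, i], Xtil[:, ancestors], P, C,
                                A_sim, P_sim, C_sim)
            # Score-based goodness of fit statistic U for the negative binomial.
            stats[i] = gof_statistic(Xtil[:, i], fits[i], A_sim, P_sim, C_sim)

        # The variable with the smallest U is declared to have all of its parents in A.
        best = min(remaining, key=lambda i: stats[i])
        params[best] = fits[best]
        ancestors.append(best)
        remaining.remove(best)

    return params, ancestors
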
RP performs causal discovery in a top-down fashion; the algorithm discovers the roots, then the children of the roots, and so forth. The algorithm first fits negative binomial distributions on each random variable given P∪ C in Line <ref>. RP then tests whether each variable follows a negative binomial in Line <ref>. If a variable does, then RP places it into A and eliminates it from X in Line <ref>. We eliminate the variable from X with the smallest U statistic in practice to avoid dependence on a pre-specified Type I error rate. When the negative binomial holds, the set (β_A i,r_i,γ_· i) obtained in Line <ref> contains the gamma distribution parameters of E_i because E_i ∼Γ(r_i,r_i/exp(Pγ_· i)). RP can therefore simulate values from p(A) in Line <ref> by drawing P as well as the samples of the corresponding error terms of A, and then passing the values through the structural equations associated with A in Equation (<ref>). RP can now discover the children of the roots by independently sampling C by bootstrap, regressing on A∪P∪ C and testing whether the model fits a negative binomial. The algorithm repeats the above process of simulation, sparse regression and goodness of fit testing until it moves all variables from X into A. The algorithm is formally sound and complete: (Identifiability) If ℙ_X|P is causally minimal and X_i ∼Pois(X_i C) for each X_i ∈X, then RP recovers (β,r,γ) with regression and goodness of fit oracles. We cannot reach the conclusion directly from the results of <cit.> due to the Poisson measurement error. We therefore instead prove the theorem in the Appendix using an overdispersion score developed for quadratic variance functions <cit.>. §.§ Root Causal Contributions We would like to utilize the recovered parameters from RP to identify root causes of disease specific to each patient. A root cause of disease intuitively corresponds to an initial perturbation to a biological system that ultimately induces a diagnostic label (Figure <ref>). We can formulate this intuition mathematically by first introducing a binary label D labeling a sample with a diagnosis of a certain illness (D=1) or a healthy control (D=0). We assume that D is a terminal vertex such that ℙ(D|X,P) = logistic(Xβ_· D + Pγ_· D). The logistic function emphasizes that the diagnosis is a noisy label of the predictors X. We may likewise consider other functions for a binary target, such as the probit. We can associate an error term E_D to D but reserve the notation E for the error terms of X so that E_D ∉E. A root cause of disease for a specific patient then corresponds to a natural intervention on the error term of an ancestor of D. In particular, consider X_i = exp( Xβ_· i) E_i from Equation (<ref>) and suppose E_i = e̅_i for a healthy control. We can interpret each error term E_i ∈E as the combined effects of unobserved variables lying upstream of only X_i – such as the DNA sequence, acetylation or methylation status of a gene. An exogenous insult – such as a somatic mutation or toxin – then changes the value of E_i from e̅_i to an “unhealthy” one e_i. The change of E_i from e̅_i to e_i affects downstream variables, ultimately impacting variables involved in the diagnostic criteria and therefore the diagnosis D itself (Figure <ref>). We can quantify the change in probability of D using the following logarithmic odds: D_0 = ln( ℙ(D=1|X,P)/ℙ(D=0|X,P)) = Xβ_· D + Pγ_· D. 
The equation depends on the variables X∪P, but we would like to quantify the causal effect of each error term E_i ∈E on D for a specific patient. The above logarithmic odds of the logistic regression model of D on X∪P admits a linear form. However, the logarithmic odds of D on E for patient j, denoted by f^j(E), generally requires a non-linear function. We learn f^j(E) by performing non-linear logistic regression with the cells associated with patient j. Let v^j(W) correspond to the conditional expectation of the non-linear model 𝔼(f^j(E)|W) for some W⊆E∖ E_i. We can measure the change in probability when intervening on some E_i ∈E for patient j via the difference δ^j_E_i W = v^j(E_i,W)-v^j(W). We have δ^j_e_i w >0 when E_i = e_i increases the probability that D=1 because v^j(e_i,w) is larger than v^j(w). We do not a priori know which W to choose, so we average over all possible W⊆E∖ E_i as follows: S^j_i = 1/p∑_W⊆ (E∖ E_i)1/p-1|W|δ^j_E_i W. An instantiation of the above quantity corresponds precisely to the Shapley value of <cit.> which, as the reader may recall, is the only additive feature attribution measure satisfying the local accuracy, missingness and consistency desiderata. We can thus quantify the root causal contribution of E_i on D using S^j_i. Measurement error precludes recovery of the exact values of E and therefore of S^j_i. We instead compute the expected Shapley for patient j given by 𝔼(S^j_i|D=1) = Υ_i^j. This expected Shapley also satisfies the three desiderata by linearity of expectation: * Local accuracy: ∑_i=1^p Υ_i^j = 𝔼 [f^j(E)|D=1] - 𝔼 f^j(E); * Missingness: if E_i ∉E, then Υ_i^j = 0; * Consistency: We have Ϋ_i^j ≥Υ_i^j for any two models f̈^j and f^j where 𝔼(δ̈^j_E_iW|D=1) ≥𝔼(δ^j_E_iW|D=1) for all W⊆E∖ E_i. The first criterion ensures that the total score ∑_i=1^p Υ_i^j remains invariant to changes in the patient-specific disease prevalence rate 𝔼 f^j(E). The second criterion implies s_D = 0 because E_D ∉E. The first and third criteria together imply that each Υ_i^j is also invariant to changes in the disease prevalence rate, since we must have δ̈^j_E_iW≥δ^j_E_iW for all W⊆E∖ E_i and X_i ∈X. The three desiderata are therefore necessary. We now introduce the following definition: The root causal contribution of X_i for patient j is Υ_i^j. Similarly, X_i is a root cause of disease (D=1) for patient j if Υ_i^j > 0. X_i is not a root cause of disease for patient j if Υ_i^j ≤ 0 because E_i does not on average increase the probability that D=1 in this case. §.§ Root Causal Inference We estimate Υ_i^j for each variable X_i ∈X and patient P_j ∈P using the Root Causal Inference with Negative Binomials (RCI-NB) algorithm summarized in Algorithm <ref>. RCI-NB first runs RP in Line <ref> to estimate the coefficients β_·X and gamma distribution parameters (r,γ_·X) in order to simulate samples from p(E,X,P). The algorithm then obtains (β_· D, γ_· D) by regressing D on X∪P with the Logistic Regression Expectation Maximization (LR-EM) algorithm in Line <ref>. LR-EM proceeds just like NB-EM but with Line <ref> removed and Equation (<ref>) replaced by: (1/n∑_j=1^n d_jz_j - 1/s∑_j=1^s μ_j z_j/1+μ_j)⊙η - λ_n (β_· D, γ_· D - γ_· D) = 0, or the corresponding score equations for logistic regression. The recovered parameters in turn enable simulation of D_0. RCI-NB therefore non-linearly regresses D_0 on E for each patient j. We use XGBoost in this paper, so we can quickly compute the expected Shapley values for each patient using TreeSHAP in Line <ref> <cit.>. 
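As a concrete reference point, the brute-force sketch below evaluates the Shapley sum exactly for small p by enumerating subsets; RCI-NB instead obtains these values efficiently with XGBoost and TreeSHAP, as noted above. The inputs f, e_row, and E_background are illustrative assumptions. Because the error terms are mutually independent within a patient, the conditional expectation v^j(W) can be approximated by fixing the coordinates in W and averaging the fitted log-odds model over background samples of the remaining coordinates.

import numpy as np
from itertools import combinations
from math import comb

def shapley_contributions(f, e_row, E_background):
    """Exact Shapley values for one cell by subset enumeration.
    f            : fitted log-odds model mapping an (m, p) array of error terms to (m,) scores
    e_row        : (p,) error-term values for the cell of interest
    E_background : (B, p) error-term samples for the same patient, used to average out
                   the coordinates not in W (valid here because the error terms are
                   mutually independent within a patient)."""
    p = e_row.shape[0]

    def v(S):
        # Approximates E[f(E) | E_S = e_row_S]: fix coordinates in S, average over the rest.
        Z = E_background.copy()
        Z[:, list(S)] = e_row[list(S)]
        return f(Z).mean()

    phi = np.zeros(p)
    for i in range(p):
        rest = [j for j in range(p) if j != i]
        for k in range(p):
            for W in combinations(rest, k):
                weight = 1.0 / (p * comb(p - 1, k))
                phi[i] += weight * (v(W + (i,)) - v(W))
    return phi

# The patient-level contribution Upsilon_i^j is then the average of these per-cell
# Shapley values over that patient's diseased cells (D = 1).
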
A Shapley oracle outputs the true expected Shapley for each patient given p(E,P,D_0). The RCI-NB algorithm is sound with oracle information: RCI-NB outputs the true expected Shapley values, or Υ_i^j for each variable X_i ∈X and patient j given regression, goodness of fit and Shapley oracles. RP recovers the parameters (β_·X, r, γ_·X) with negative binomial regression and goodness of fit oracles per Theorem <ref>. Similarly, RCI-NB recovers β_· D and γ_· D with a logistic regression oracle. Lines 2 and 4 therefore simulate samples from p(E,P,D_0) and recover 𝔼(S_i^j | D=1) with a Shapley oracle in Line <ref>. § EXPERIMENTS §.§ Algorithms We compared RCI-NB against the following four algorithms representing the state of the art in inference for patient-specific root causes of disease: * Root Causal Inference (RCI): an efficient top-down algorithm that infers patient-specific root causes assuming a linear model with non-Gaussian error terms <cit.>. * Independent Component Analysis (ICA): utilizes a general purpose ICA algorithm to extract the error term values also assuming a linear model with non-Gaussian error terms <cit.>. * Generalized Root Causal Inference with the Additive Noise Model (ANM): a bottom-up algorithm that generalizes RCI to non-linear models <cit.>. We equipped GRCI with ANM by solving for (β, γ) with NB-EM. We then subtracted out the conditional means to recover the error term values. * Generalized Root Causal Inference with the Heteroscedastic Noise Model (HNM): same as ANM, but we solved for both (β, r,γ) with NB-EM. We then subtracted out the conditional means and dividing by the conditional standard deviations to recover the error term values. We equipped RCI-NB, ANM and HNM with the same XGBoost TreeSHAP procedure for estimating the expected Shapley values with extracted error terms <cit.>. ANM and HNM both utilize NB-EM and LR-EM like RCI-NB. RCI and ICA use linear logistic regression models for inferring the expected Shapley values in accordance with their linearity assumption. No algorithm except RCI-NB takes measurement error into account. Reproducibility. All code needed to replicate the experimental results is available at https://github.com/ericstrobl/RCINB. §.§ Evaluation Criteria All of the above algorithms output an expected Shapley value for each patient and each variable. Moreover, the Shapley values involve a predictive model fit on the error terms. We therefore compared the outputs of the algorithms utilizing the root mean squared error (RMSE): √(1/qp∑_j=1^q ∑_i=1^p (Υ_i^j - Υ_i^j)^2), where lower is better. If an algorithm only estimates expected Shapley values for a subset of variables, then we set the values of the missing variables to zero. We computed the ground truth Shapley values Υ^j to negligible error by running XGBoost TreeSHAP on 100,000 ground truth values of (E, P, D_0). We also measured the average running time of each algorithm in seconds. §.§ Synthetic Data §.§.§ Data Generation We generated structural equation models obeying Equation (<ref>) as follows. We first generated DAGs with p=7 or p=12 variables in X and an expected neighborhood size of two. We created a random adjacency matrix by sampling from a Bernoulli(2/(p-1)) distribution in the upper triangular portion of the matrix. 
We introduced weights β and offsets γ by drawing values from the uniform distribution on [-1,-0.25] in order to stay within machine precision after exponentiation – except for the terminal vertex D whose incoming edges β_D were drawn from the uniform distribution on [-1,-0.25] ∪ [0.25,1]. We drew D randomly from the set of terminal vertices with at least one parent. We similarly set the shape parameters of each gamma distribution by drawing from a uniform distribution on [0.1, 1]. We drew C from a gamma distribution with shape and rate equal to one. We generated n=10,000 or 100,000 cell samples from 2 to 10 individuals in P again sampled uniformly. We repeated the above procedure 250 times and therefore generated a total of 250 × 2 × 2 = 1000 independent datasets. §.§.§ Results We summarize the results in Table 1. Bolded values denote the best performance per dimension and sample size. All bolded values are significant at a Bonferonni corrected threshold of 0.05/5 using paired t-tests, since we compared the performance of five algorithms. RCI-NB achieved the lowest mean RMSE across all dimension numbers and sample sizes. Moreover, the algorithm continued to improve with increasing sample sizes. The other algorithms all performed worse but comparably across dimensions; their performances also did not improve with increasing sample sizes. Accounting for Poisson measurement error thus steadily improves performance with more samples. RCI and ICA both completed within two seconds due to the computational efficiencies gained by assuming linearity. RCI-NB took significantly longer than both RCI and ICA but completed approximately two times as quickly as the alternative non-linear approaches ANM and HNM. We conclude that RCI-NB is slower than linear methods but faster than alternative non-linear ones. §.§ Real Data §.§.§ Lung Adenocarcinoma We evaluated the algorithms on their ability to discover the root causes of lung adenocarcinoma. The GSE-123904 scRNA-seq dataset of <cit.> contains RNA counts from 17,502 single cells derived from cancerous and normal adjacent tissue of three patients (IDs 675, 682 and 684). The mitogen-activated protein kinase (MAPK) pathway plays an important role in lung carcinogenesis <cit.>. KRAS and EGFR comprise the top driver genes in lung adenocarcinoma <cit.>. EGFR had nearly all zero counts in the data, so we included downstream genes GRB2, HRAS, ARAF and CCND1 instead. We added KRAS and TP53 as well. We do not have access to the ground truth expected Shapley values with real data. However, we can estimate them to high accuracy using the ground truth causal graph. We obtained the causal relations between the genes using the KEGG pathway of non-small cell lung cancer (HSA05223) and plot the pathway in Figure <ref> <cit.>. We estimated the ground truth Shapley values by (1) fitting negative binomial regression models using the sample versions of Equation (<ref>) on the ground truth parent set of each variable, (2) sampling from the de-noised gamma distributions and (3) running XGBoost TreeSHAP on the data of each patient. Recall that RCI-NB, ANM and HNM all use NB-EM, LR-EM and TreeSHAP. We plot the RMSE and timing results of the algorithms in Figure <ref> as averaged over 50 bootstrapped samples. RCI-NB achieved the lowest MSE by a large margin. The algorithms utilizing ANM and HNM achieved lower accuracy than the linear methods ICA and RCI. 
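Returning to the synthetic data-generation step described above, the following sketch reproduces its random-graph portion: a Bernoulli(2/(p-1)) upper-triangular adjacency matrix, weights and offsets drawn uniformly from [-1,-0.25], gamma shapes drawn from [0.1,1], and counts generated through one standard Poisson–gamma construction of the negative binomial (which has the quadratic variance 𝔼+𝔼²/r used later in the Appendix). Patient effects, the choice of the diagnosis vertex D and the multi-patient loop are omitted, so this is an illustrative simplification rather than the exact generator behind Table 1.

```python
import numpy as np

def sample_dag(p, rng):
    """Upper-triangular Bernoulli(2/(p-1)) adjacency: expected neighborhood size two."""
    return np.triu(rng.random((p, p)) < 2.0 / (p - 1), k=1).astype(int)

def sample_parameters(A, rng):
    """Weights/offsets uniform on [-1,-0.25]; gamma shapes uniform on [0.1, 1]."""
    p = A.shape[0]
    beta = A * rng.uniform(-1.0, -0.25, size=(p, p))
    gamma = rng.uniform(-1.0, -0.25, size=p)
    r = rng.uniform(0.1, 1.0, size=p)
    return beta, gamma, r

def simulate_counts(beta, gamma, r, n, rng):
    """One patient: gamma error layer -> log-linear means -> Poisson measurement."""
    p = beta.shape[0]
    X = np.zeros((n, p))
    C = rng.gamma(shape=1.0, scale=1.0, size=n)            # per-cell depth factor
    for i in range(p):                                     # columns are causally ordered
        mean = np.exp(gamma[i] + X[:, :i] @ beta[:i, i])   # parents precede i
        latent = rng.gamma(shape=r[i], scale=mean / r[i])  # gamma "error" layer, mean = mean
        X[:, i] = rng.poisson(latent * C)                  # Poisson measurement of latent * C
    return X

rng = np.random.default_rng(0)
A = sample_dag(7, rng)
beta, gamma, r = sample_parameters(A, rng)
counts = simulate_counts(beta, gamma, r, n=10_000, rng=rng)
```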
All non-linear algorithms took substantially longer than the linear ones, but RCI-NB still completed faster than both HNM and ANM. We conclude that RCI-NB achieves the highest accuracy and completes the fastest among the non-linear methods in this dataset. §.§.§ Cervical Carcinoma We next evaluated the ability of the algorithms to discover the root causes of cervical squamous cell carcinoma. We downloaded scRNA-seq data from E-MTAB-11948 used in <cit.>. The dataset contains 69,938 cells from cancerous and normal adjacent tissue of three patients with cervical cancer. PIK3CA is the most frequently mutated gene in cervical carcinoma <cit.>. All three patients also tested positive for HPV type 16 that produces oncoproteins E5, E6 and E7 known to effect EGFR and the PI3K signaling pathway <cit.>. The PI3K signaling pathway effects cell cycle progression via GSK3B and FOXO1 as well as cell survival via MDM2 and TP53 according to the HPV KEGG pathway (HSA05165). We plot the ground truth causal graph in Figure <ref>. We summarize the results in Figure <ref> as averaged over 50 bootstrapped draws. RCI-NB again achieved the lowest average RMSE by a large margin. The linear algorithms did not consistently outperform non-linear ANM and HNM. Instead, all algorithms besides RCI-NB performed comparably. RCI-NB took 202.7 seconds to complete on average, on-par with ANM and HNM in this case. We conclude that RCI-NB again achieves the highest accuracy with timing comparable to other non-linear algorithms. The real data results therefore mimic those seen with synthetic data. § CONCLUSION We presented a post non-linear SEM consisting of gamma distributed error terms and random variables corrupted by Poisson measurement error. We then showed that each variable admits a negative binomial distribution when conditioned on its parents and patient. We used this fact to derive novel regression and goodness of fit testing procedures that bypass Poisson measurement error. The test requires samples from the joint distribution of the parents, which we recovered using the top-down RCI-NB algorithm. Experimental results highlighted the superiority of RCI-NB in recovering the true root causal contributions – quantified using expected Shapley values – in both synthetic and real data. Future work could improve the scalability of the method and accommodate latent confounding not related to measurement error. § APPENDIX §.§ Proposition 1 Let θ=(α,r) and X_i = Y. Consider the derivative of the corrected log-likelihood given by 1/n∑_i=1^n S_i(θ) where θ = (α, r) and: S_i(α) = y_i z_i - 1/s∑_j=1^s μ_j z_j S_i(r) = ψ(y_i + r) - ψ(r) + ln(r) - 1/s∑_j=1^s ln(r + μ_j). The original uncorrected versions of the score equations correspond to: S_i^*(α) = y_i z_i - r + y_i/r + μ_iμ_i z_i S_i^*(r) = 1 -r + y_i/r + μ_i + ψ(y_i + r) - ψ(r) + ln(r) - ln(r + μ_i), The following conclusion holds: prop:normality (Asymptotic normality) Assume n →∞, s →∞ and n/s → 0. Further assume that Ω = Var(μZ, ln(r+μ)) and Σ = -𝔼 S^'(θ_0) are positive definite. Then √(n)(θ_n - θ_0) →𝒩(0,Σ^-1(J_1 + J_2 + J_3) Σ^-1). We can write: 1/√(n)∑_i=1^n S_i (θ) = 1/√(n)∑_i=1^n S^*_i (θ) + 1/√(n)∑_i=1^n A_i(θ) + √(n)/s∑_j=1^s B_j(θ), where: A_i(θ) = ( r + y_i/r + μ_iμ_i z_i - 𝔼μZ r + y_i/r + μ_i - 1 + ln(r + μ_i) - 𝔼ln(r + μ) ), and: B_j(θ) = ( 𝔼μZ - μ_j z_j 𝔼ln(r+μ) - ln(r+μ_j) ). We consider a compact neighborhood Q_ρ = {θ : |θ - θ_0| ≤ρ} for some ρ >0. 
We now invoke the integral form of the mean valued theorem <cit.>: 1/√(n)∑_i=1^n S_i (θ_n) = 1/√(n)∑_i=1^n S_i (θ_0) - √(n)(θ_n - θ_0)C_n, where C_n = - ∫_0^1 1/n∑_i=1^n S^'_i (θ_0 + u(θ_n - θ_0))  du. We have sup_θ∈ Q_ρ |1/n∑_i=1^n S_i(θ) - 𝔼_θ_0S(θ)| → 0 almost surely by the uniform strong law of large numbers with n →∞ and s →∞ <cit.>. We then invoke Theorem 2.1 in <cit.> to conclude that θ_n is a strongly consistent sequence satisfying ∑_i=1^n S_i (θ_n) = 0. Therefore 1/√(n)∑_i=1^n S_i (θ_0) = √(n)(θ_n - θ_0)C_n. We consider the right hand side of Equation (<ref>). We have: 1/√(n)∑_i=1^n S^*_i (θ_0) 𝒩(0,J_1), where J_1 = 𝔼 S^*(θ_0) S^*T(θ_0). We also have 1/√(n)∑_i=1^n A_i (θ_0) 𝒩(0,J_2), where J_2 = 𝔼 A(θ_0) A^T(θ_0). Thus: 1/√(n)∑_i=1^n S^*_i (θ_0) + A_i (θ_0) 𝒩(0,J_1+J_2 + J_3), where J_3 = 𝔼 A(θ_0)S^*T(θ_0) + 𝔼 S^*(θ_0)A^T(θ_0). We finally have: 1/√(s)∑_j=1^s B_j (θ_0) 𝒩(0,Ω), since Ω is positive definite. We invoke Slutsky's lemma so that: √(n)/s∑_j=1^s B_j (θ_0) = √(n/s)1/√(s)∑_j=1^s B_j (θ_0) 0, because n/s → 0. As a result: 1/√(n)∑_i=1^n S_i (θ_0) 𝒩(0,J_1 + J_2 + J_3). We next show that C_n →Σ almost surely. We let ε > 0. The function 𝔼_θ_0 S^'(θ) is continuous in θ. We can therefore identify a ρ >0 such that | θ - θ_0 | < ρ implies | 𝔼_θ_0 S^' (θ) + Σ | < ε/2. Again by the uniform strong law of large numbers, there exists an integer N such that the following holds with probability one for all n > N: sup_θ∈ Q_ρ| 1/n∑_i=1^n S_i^'(θ) - 𝔼_θ_0 S^'(θ) | < ε/2. Now assume N is so large such that for all n > N, we have | θ_n - θ_0 | < ρ. Hence, for all n > N, we have: | C_n - Σ | ≤∫_0^1 | 1/n∑_i=1^n S^'_i (θ_0 + u(θ_n - θ_0)) + Σ| du ≤ ∫_0^1 sup_θ∈ Q_ρ| 1/n∑_i=1^n S_i^'(θ) - 𝔼_θ_0 S^'(θ) | + | 𝔼_θ_0 S^'(θ) + Σ |  du < ε. We conclude that C_n →Σ almost surely because we chose ε arbitrarily. We now invoke Slutsky's lemma: √(n)(θ_n - θ_0) = C_n^-11/√(n)∑_i=1^n S_i(θ_0) 𝒩(0,Σ^-1(J_1 + J_2 + J_3) Σ^-1). §.§ Theorem 1 We consider the following overdispersion score: T(X_i, U) = w^2_iUVar(X_i | U) - w_i U𝔼(X_i | U), where w^-1_iU = 1 + 𝔼(X_i | U)/r and U⊆X∪P∪ C. The negative binomial of X_i conditional on U has mean 𝔼(X_i | U) and variance 𝔼(X_i | U) + 𝔼(X_i | U)^2/r. Thus T(X_i, U) = 0 in this case. Let U more specifically correspond to a subset of the non-descendants of X_i always including P∪ C. We have: If ℙ_X|P is causally minimal and X_i ∼Pois(X_i C) for each X_i ∈X, then T(X_i, U) = 0 if and only if Pa(X_i)⊆U. We can write the following sequence: T(X_i, U) = w^2_iUVar(X_i | U) - w_i U𝔼(X_i | U) (a)=w^2_iU[ Var(𝔼(X_i | Pa(X_i),P,C)|U) + 𝔼(Var(X_i|Pa(X_i),P,C)|U) - w^-1_i U𝔼(X_i|U)] (b)=w^2_iU[ Var(𝔼(X_i | Pa(X_i),P,C)|U) + 𝔼(𝔼(X_i|Pa(X_i),P,C)|U) +𝔼(𝔼(X_i|Pa(X_i),P,C)^2/r|U) - (1 + 𝔼(X_i | U)/r) 𝔼(X_i|U)] =w^2_iU[ Var(𝔼(X_i | Pa(X_i),P,C)|U) +𝔼(𝔼(X_i|Pa(X_i),P,C)^2/r|U) - 𝔼^2(X_i|U)/r] =w^2_iU (1+1/r) Var(𝔼(X_i | Pa(X_i),P,C)|U), where (a) follows from the variance decomposition formula, and (b) from the quadratic variance property of the negative binomial. For the backward direction, if Pa(X_i)⊆U, then Var(𝔼(X_i | Pa(X_i),P,C)|U) = 0, so T(X_i, U) = 0. For the forward direction, assume by contrapositive that U does not contain all members of Pa(X_i). Then Var(𝔼(X_i | Pa(X_i),P,C)|U) > 0 by causal minimality, so T(X_i, U) > 0. thm:identifiability If ℙ_X|P is causally minimal and X_i ∼Pois(X_i C) for each X_i ∈X, then RP recovers (β,r,γ) with regression and goodness of fit oracles. We prove the statement by induction. Base: suppose |X| = 1. 
Then X_i ∈X is a Poisson-gamma mixture and therefore a negative binomial. Hence, RP recovers (β_· i,r_i,γ_· i) in Line <ref>. Induction: suppose the conclusion holds with |X| = p. We need to prove the statement when |X∪X_i| = p+1. We have two situations: * Assume that A contains all of the parents of X_i and none of its descendants. Then X_i is a Poisson-gamma mixture given A∪P∪ C and hence a negative binomial. RP again recovers (β_· i,r_i,γ_· i) in Line <ref>. The algorithm then places X_i into A and removes it from X in Line <ref>. * Assume that A (a) does not contain all of the parents of X_i or (b) contains at least one of the descendants of X_i from X∖X_i (or both). For (a), assume for a contradiction that RP removes X_i from X in Line <ref>. Then S(X_i, A∪P∪ C)>0 by Lemma <ref> but this contradicts the fact that S(X_i, A∪P∪ C)=0 because X_i follows a negative binomial. Thus RP does not place X_i into A and remove it from X in Line <ref>. For (b), assume for a contradiction that A contains a descendant of X_i from X∖X_i. But then there exists at least one descendant X_j of X_i (X_j ≠X_i) whose parents are not all in A, and X_j was removed by RP in a previous iteration. We therefore arrive at a contradiction again by Lemma <ref>. We conclude that RP does not place X_i into A and remove it from X in Line <ref> in either case. The conclusion follows by the inductive hypothesis.
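To make the top-down recursion in the proof above concrete, the sketch below implements its control flow: at each step it fits a negative binomial regression of every remaining variable on the current ancestral set, evaluates a plug-in estimate of the overdispersion score T(X_i,U), and moves the variable whose score is closest to zero into the ancestral set. For simplicity it uses an off-the-shelf negative binomial fit on the observed counts, so it omits the measurement-error correction that NB-EM performs; it is only meant to illustrate the ordering logic of RP, and all names are illustrative.

```python
import numpy as np
import statsmodels.api as sm

def overdispersion_score(y, mu_hat, r_hat):
    """Plug-in estimate of E[T(X_i,U)] = E[w^2 Var(X_i|U) - w E(X_i|U)],
    with w = 1/(1 + E(X_i|U)/r); it vanishes when X_i | U is negative binomial."""
    w = 1.0 / (1.0 + mu_hat / r_hat)
    return np.mean(w**2 * (y - mu_hat)**2 - w * mu_hat)

def rp_order(X, patient_dummies):
    """Schematic top-down RP loop: grow the ancestral set A one variable at a time."""
    n, p = X.shape
    remaining, A = list(range(p)), []
    while remaining:
        scores = {}
        for i in remaining:
            design = sm.add_constant(np.column_stack([X[:, A], patient_dummies]))
            fit = sm.NegativeBinomial(X[:, i], design).fit(disp=0)
            mu_hat = fit.predict(design)
            r_hat = 1.0 / fit.params[-1]       # statsmodels reports alpha = 1/r
            scores[i] = abs(overdispersion_score(X[:, i], mu_hat, r_hat))
        best = min(scores, key=scores.get)     # ~0 only if all parents are already in A
        A.append(best)
        remaining.remove(best)
    return A                                   # a causal order consistent with the DAG
```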
http://arxiv.org/abs/2307.05550v1
20230709072109
Exploring high scale seesaw models through a supersymmetric portal
[ "Yi Liu", "Stefano Moretti", "Harri Waltari" ]
hep-ph
[ "hep-ph" ]
§ INTRODUCTION Neutrino masses have been known to be non-zero for 25 years <cit.>. As they are so much smaller than all other Standard Model (SM) fermion masses, one usually assumes that they are generated by some kind of a seesaw mechanism <cit.>. The masses are still generated through the Higgs mechanism, but suppressed by a heavy seesaw particle, which can be a singlet neutrino (Type-I), a triplet of Higgs bosons (Type-II) or a triplet of exotic leptons (Type-III) (see Refs. <cit.> for reviews). The seesaw scale is a priori unknown. If the seesaw scale is around the Electro-Weak (EW) scale, one may be able to produce the seesaw particles directly at the Large Hadron Collider (LHC) <cit.>. One of the original ideas <cit.> was that the smallness of the neutrino masses could be related to the breaking of a Grand Unification Theory (GUT), i.e., the relevant Yukawa couplings would be of order unity and the seesaw scale somewhere around α M_GUT∼ 10^14 GeV. Such energy scales are obviously out of the reach of present and future colliders. Supersymmetry, the symmetry between fermions and bosons, is often a necessary ingredient in formulating models with large separations of scales. Due to the cancellation between the bosonic and fermionic loops, the separation of scales is radiatively stable <cit.>, once it has been generated by some dynamics. Thus in the supersymmetric framework, scalar masses would not get quadratic corrections proportional to the seesaw scale and an EW scale Higgs boson would not be unnatural even if the seesaw scale was close to the GUT scale. In the context of high scale seesaw models, supersymmetry has one remarkable property. The scalar potential, and especially its F-terms of the form V=∑_i| ∂ W/∂φ_i|^2, leads to four-scalar interactions without the seesaw particle but with the seesaw couplings involved. If the couplings are of the order unity, they are among the largest ones in the model and could lead to observable consequences. For definiteness, let us consider the Type-I seesaw model, where the extra superpotential terms in addition to those of the Minimal Supersymmetric Standard Model (MSSM) are W=W_MSSM+ y^ν L· H_u N^c+M_NN^cN^c, where we assume y^ν∼ 1 and M_N∼ 10^14 GeV. When differentiating with respect to N^c, one gets the term ∑_k y^ν *_iky^ν_jkL̃^†_i· H_u^†L̃_j· H_u, involving only Higgs bosons and left-handed sleptons, which we assume to be at the TeV scale. If there are significant mass splittings between the sfermion generations, which could well be generated through Renormalisation Group Evolution (RGE) due to the large couplings, one might get processes like ν̃_i→ν̃_jh with a large Branching Ratio (BR). If the sneutrinos decay visibly, the decays can be distinguished from mono-Higgs signatures that could arise from dark matter <cit.>. Slepton decays with Higgs bosons in the final state could offer an indication of a high scale seesaw model and thus provide us with a window to scales otherwise beyond our experimental reach. Our aim is to investigate how one could observe such slepton decay patterns involving Higgs bosons in seesaw models of Type-I and Type-III, which have a similar structure in terms of the TeV scale Lagrangian.
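The F-term mechanism sketched above can be made explicit in a toy single-generation computation in which the SU(2) structure and complex conjugation are suppressed and the fields are treated as real scalar components; the point is simply that |∂W/∂N^c|² contains a slepton–slepton–Higgs–Higgs contact term proportional to |y^ν|² that survives when the heavy singlet is set to zero. The snippet below is only a hedged illustration of that algebra, not the full MSSM scalar potential.

```python
import sympy as sp

y, M = sp.symbols('y M', positive=True)
L, Hu, N = sp.symbols('Ltilde H_u N', real=True)   # scalar components, one generation

# Toy Type-I superpotential (SU(2) contractions and conjugation suppressed)
W = y * L * Hu * N + M * N**2

F_N = sp.diff(W, N)          # F-term of the heavy singlet: y*Ltilde*H_u + 2*M*N
V_F = sp.expand(F_N**2)      # |dW/dN|^2 for real fields

print(V_F)                   # y**2*Ltilde**2*H_u**2 + 4*M*y*Ltilde*H_u*N + 4*M**2*N**2
print(V_F.subs(N, 0))        # y**2*Ltilde**2*H_u**2 : the light four-scalar coupling
```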
Our paper is organised as follows. Higgs-slepton interactions are described in the next section, which is followed by a discussion of the production and decay modes relevant to our research. Our numerical analysis is introduced in the following section, after which we conclude. § HIGGS-SLEPTON INTERACTIONS IN SEESAW MODELS We shall now look at how the Higgs-slepton interactions arise from our seesaw models in some detail. In particular, we look at Type-I and Type-III seesaw models. Both have Yukawa couplings that connect the lepton and Higgs doublets to the seesaw particles, which form a singlet and triplet under SU(2). The superpotential of Type-I seesaw is given in Eq. (<ref>) and for Type-III seesaw it is W = W_MSSM + y^ν L Σ H_u + M_ΣTr(Σ^2), where L is the left-chiral lepton doublet and H_u = (H^+ , H^0)^T is the up-type Higgs doublet. The Σ is an antilepton (L=-1) chiral superfield which transforms as (1,3,0) under the SM gauge group SU(3)_c× SU(2)_L × U(1)_Y. The mass term for Σ violates lepton number by two units. The superfield Σ can be represented Σ = σ^iΣ^i= ( [ Σ^0/√(2) Σ^+; Σ^- -Σ^0/√(2) ]), Σ^± = Σ^1 ∓ iΣ^2/√(2), Σ^0 = Σ^3. The models look very similar in what comes to neutrino mass generation, both having a lepton and a Higgs doublet coupling to the companion neutrinos. The only difference is that the L and H_u superfields combine to a singlet in the case of Type-I and to a triplet in the case of Type-III seesaw. This difference between the two seesaw models leads to a difference in the scalar potential which contributes the processes that lead to slepton decays containing a Higgs boson. When we expand the neutrino Yukawa terms in the superpotential, we get W = y^ν_ij( e^-_iH_u^+-1/√(2)ν_i H_u^0)N^c_j +…, W = y^ν_ij( 1/√(2)e^-_iH_u^+Σ^0_j -ν_iΣ^-_jH_u^++1/√(2)e^-_iΣ^+_jH_u^0+1/2ν_iΣ^0_jH_u^0)+…, for Type-I and Type-III, respectively. Here we have included a factor of 1/√(2) into the definition of the neutral Higgs field. Differentiating with respect to the heavy seesaw fields leads to the scalar potentials V = ∑_k1/2y^ν_iky^ν *_jkν̃_iν̃^*_jH_u^0H_u^0 *+…, V = ∑_k1/4 y^ν_iky^ν *_jk(ν̃_iν̃^*_jH_u^0H_u^0 *+2ẽ^-_iẽ^+_jH_u^0H_u^0 *)+… , for Type-I and Type-III, respectively. Hence one in general gets Higgs interactions with sleptons that are non-diagonal in flavour space and, in the case of a high scale seesaw, have large couplings. After EW Symmetry Breaking (EWSB) we have ⟨ H_u^0⟩ = vsinβ (v=246 GeV), which generates a three-point coupling between sleptons and the SM-like Higgs. One may also note that in Type-III seesaw there is a non-flavour-diagonal coupling between charged sleptons and Higgs bosons, while there is no such coupling in the case of Type-I seesaw. As we discuss below, this leads to a stronger signal arising from Type-III than Type-I seesaw. We further notice that, while the usual D-terms of the scalar potential also contain large couplings between sneutrinos, charged sleptons and Higgs bosons, such couplings are always flavour-diagonal and cannot result in decays of the type ν̃_2→ν̃_1h, which is our smoking gun signature for high scale seesaw models. Besides the decay modes containing Higgs bosons, there are other decay channels and the visibility of the signal depends on the branching ratios. If the Lightest Supersymmetric Particle (LSP) is a higgsino-like neutralino and the gauginos are heavier than the sleptons, the decays of the left-handed sleptons arise from the superpotential term y^ℓ LH_dE^c, so one gets the decays ν̃→χ̃^±ℓ^∓ and ℓ̃^±→χ̃^0ℓ^±. 
These lead to partial widths Γ(ν̃_j→ℓ^±_jχ̃^∓_i) = |y^ℓ_jj|^2|U_i2|^2(m_ν̃^2-m_χ̃^2)^2/32π m_ν̃^3, Γ(ℓ̃^±_j→ℓ^±_jχ̃^0_i) = |y^ℓ_jj|^2|N_i3|^2(m_ℓ̃^2-m_χ̃^2)^2/16π m_ℓ̃^3, where U_i2 gives the higgsino component of the chargino (for our benchmarks |U_i2|≃ 1), N_i3 gives the down-type higgsino component of the neutralino (for our benchmarks |N_13|≃ 1/√(2)). If the soft slepton masses are not flavour diagonal, an appropriate linear combination of the leptonic Yukawas corresponding to the flavour composition of the sleptons must be used. If the LSP is a gaugino there are additional decay channels ν̃→νχ̃^0 and ℓ̃^±→χ̃^±ν (if winos are light) and the decay widths are propotional to g^2 instead of |y^ℓ|^2 and gaugino components instead of higgsino components. Since we have the hierarchy y^ℓ_11≪ y^ℓ_22≪ y^ℓ_33≪ g, the strength of our signal will depend on the nature of the light neutralinos and charginos and in the case of higgsinos, the flavour of the heavier sleptons. As the electron and muon Yukawas are so tiny, in practice the mixing between the gaugino and higgsino components will be significant for the overall decay widths of the sneutrinos and charged sleptons unless the gauginos are extremely heavy. We shall concentrate on the higgsino case, since as we shall see, already the tau Yukawa is so large that the signal containing Higgs bosons will have a too small branching ratio if stau is the heavy slepton that decays. Hence in all our benchmarks we make our gauginos heavier than the sleptons. § THE PRODUCTION AND DECAY MECHANISMS To study the high-scale seesaw signatures with Higgs bosons, we build some Benchmark Points (BPs) with m(ẽ^±)<m(μ̃^±)<m(τ̃^±) and mass splittings between generations larger than m_h≈ 125 GeV (the mass of the SM-like state h). As we shall see, this will be the limiting case, where we still can see a signal. If the second slepton (assuming the third one to be too heavy to be produced efficiently) would be a selectron, the signal would be similar (as the mixing with gauginos dominates the other decay modes already for smuons), while in the case of a stau, the signal would almost vanish due to the larger partial widths from equations (<ref>) and (<ref>). We consider the charged current process pp→ℓ̃_2^±ν̃_2, where the subscript indicates mass ordering. The charged current portal is more promising as the final state contains charged leptons even when the sneutrino decays invisibly. As discussed above, in Type-III seesaw both sneutrinos and charged sleptons can decay to final states with Higgs bosons. The dominant process is ℓ̃_2→ℓ̃_1 h while ν̃_2 →ℓ^±χ̃_1^∓, νχ̃^0. The Feynman diagram for such a process is shown in Fig. <ref>. There is also a process, where the Higgs originates from a sneutrino decay, but that has a smaller BR as can be seen from equation (<ref>). In Type-I seesaw, only the sneutrino can decay into a Higgs boson via ν̃_2 → h ν̃_1. The corresponding Feynman diagram is shown in Fig. <ref>. These processes can lead to a variety of final state topologies. Currently the limit for charged slepton masses is m(ẽ^±),m(μ̃^±)> 700 GeV for neutralino masses below 350 GeV <cit.>, which we take as our lower limit of charged slepton masses[With more compressed spectra m(ℓ̃)-m(χ̃^0)≲ 100 GeV, one obviously can have significantly lighter sleptons. Such cases need a different analysis strategy than the one adopted here as we rely on large E_T to suppress SM backgrounds.]. 
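The two partial widths quoted above translate directly into code; the sketch below evaluates them (in GeV) for illustrative inputs, namely a roughly 1 TeV slepton/sneutrino, a 500 GeV higgsino, a muon-sized Yukawa coupling, and the mixing-matrix entries |U_i2|≃1 and |N_13|≃1/√2 mentioned in the text. These numbers are placeholders rather than the benchmark points of the next section.

```python
import numpy as np

def gamma_snu_to_lep_chargino(y_l, U_i2, m_snu, m_cha):
    """Gamma(snu_j -> l_j chi^-/+_i) = |y_l|^2 |U_i2|^2 (m_snu^2 - m_cha^2)^2 / (32 pi m_snu^3)."""
    return abs(y_l)**2 * abs(U_i2)**2 * (m_snu**2 - m_cha**2)**2 / (32 * np.pi * m_snu**3)

def gamma_slep_to_lep_neutralino(y_l, N_i3, m_slep, m_chi):
    """Gamma(slep_j -> l_j chi^0_i) = |y_l|^2 |N_i3|^2 (m_slep^2 - m_chi^2)^2 / (16 pi m_slep^3)."""
    return abs(y_l)**2 * abs(N_i3)**2 * (m_slep**2 - m_chi**2)**2 / (16 * np.pi * m_slep**3)

# Illustrative inputs (GeV): y_mu ~ 6e-4, 1 TeV smuon/sneutrino, 500 GeV higgsino.
print(gamma_snu_to_lep_chargino(6e-4, 1.0, 1000.0, 500.0))
print(gamma_slep_to_lep_neutralino(6e-4, 1.0 / np.sqrt(2), 1000.0, 500.0))
```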
This means that the overall production rate of slepton-sneutrino pairs will be low, especially as we have to produce second generation sleptons with a large mass splitting compared to the first generation ones. In fact, the production rate at the LHC even with nominal collision energy (√(s)=14 TeV) is so low (∼ 30 ab for 1 TeV sleptons), that there will not be sufficient statistics even at the High-Luminosity LHC (HL-LHC) <cit.>. Hence we turn to the proposed High-Energy LHC (HE-LHC) <cit.> with a nominal collision energy of √(s)=27 TeV. This increases the production cross section by an order of magnitude compared to the standard LHC. In Tab. <ref> we show the lepton multiplicities for some typical benchmark points (BP1 and BP3, defined in Table <ref>). We see that the single lepton final state has the highest multiplicity for both seesaw models. As we will lose a part of the signal due to different BRs involved in the model, it is reasonable to look at the state with the highest multiplicity first. We also pick the Higgs decay mode to b-quarks as that has the highest BR and allows to reconstruct the Higgs boson, although not with a too high precision in mass. Unfortunately the channels with good mass resolution (i.e., γγ and ZZ^*→ 4 leptons) are too rare to be useful with such a small event rate. Our signal events will then consist of events with a single lepton, two b-tagged jets and missing momentum carried by the LSP. The largest SM backgrounds to this final state arise from the following processes: * tt̅ production where one the top (anti)quarks decays semileptonically and the other one hadronically; * W^±h production in the case where the W^± boson decays into a lepton and a neutrino. These have been considered to be the dominant backgrounds in similar types of experimental analyses (e.g., <cit.>). § SIMULATION AND RESULTS In this section we will describe our numerical toolbox and the Monte Carlo (MC) simulations that we have pursued with it. §.§ Analysis strategy The model files are produced by the Mathematica package Sarah v4.14 <cit.>. This code also generates a source code for Spheno v4.0.4 <cit.> to obtain the mass spectrum and couplings as well as for Madgraph5 v2.8.2 <cit.> to simulate collider events. We use Pythia v8.2 <cit.> for parton showering and hadronisation while we simulate the detector response by using Delphes3 <cit.>. We simulate the analysis and present our numerical results with Madanalysis5 v1.8 <cit.>. We prepare two BPs for Type-III seesaw and two for Type-I seesaw, which can be detected in the HE-LHC with 27 TeV collision energy and the integrated luminosity 10 ab^-1. We simulate proton-proton collisions to produce the second generation sneutrino (ν̃_2) and slepton (ℓ_2), which in our cases are smuon-like, and select decays to the SM-like Higgs boson plus corresponding first generation particles. The mass of ν̃_2 and ℓ_2 should be heavy enough to allow for the decay kinematics. At the same time, the mass of lightest slepton is required to be larger than 700 GeV <cit.>. The particle mass spectra and relevant BRs are shown in Tab. <ref>. All of the BPs have the same Lightest Supersymmetric Particle (LSP) and Next-to-LSP (NLSP), which are higgsino-like neutralinos and charginos. BP1 has a mass spectrum similar to BP3 and the same situation arises between BP2 and BP4. However, there is a significant difference in the Higgs production cross section times BRs between Type-III seesaw and Type-I seesaw. 
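Before turning to the detailed numbers, a back-of-the-envelope estimate helps fix orders of magnitude: with the ~30 ab pair-production cross section quoted at 14 TeV scaled up by roughly an order of magnitude at 27 TeV, an integrated luminosity of 10 ab⁻¹ and a percent-level combined branching ratio times acceptance, one lands in the few-tens-of-events regime found in the next subsection. The inputs in the sketch below are placeholders, and the Asimov formula is a standard significance estimate rather than necessarily the one behind the quoted tables.

```python
import numpy as np

def expected_events(sigma_fb, lumi_fb, efficiency):
    """N = cross section x integrated luminosity x (branching ratios x selection efficiency)."""
    return sigma_fb * lumi_fb * efficiency

def asimov_significance(s, b):
    """Median discovery significance Z_A = sqrt(2[(s+b) ln(1+s/b) - s])."""
    return np.sqrt(2.0 * ((s + b) * np.log1p(s / b) - s))

# Placeholders: ~0.3 fb at 27 TeV (an order of magnitude above ~30 ab at 14 TeV),
# 10 ab^-1 = 1e4 fb^-1, and a percent-level BR x acceptance after all cuts.
s = expected_events(sigma_fb=0.3, lumi_fb=1.0e4, efficiency=0.01)   # ~30 signal events
b = 15.0                                                            # assumed residual background
print(s, asimov_significance(s, b))
```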
For the sneutrino decay process, Type-I seesaw has BRs larger than the Type-III ones, which can be traced back to the factors in equations (<ref>) and (<ref>). However, the charged slepton decay channel does not exist in Type-I seesaw whereas it dominates the Higgs signal in Type-III seesaw, consistent with equations (<ref>) and (<ref>). As the slepton masses increase, the BR shows a decreasing trend. The BR for μ̃^±→ẽ^±h is high in Type-III seesaw, since the competing decay mode of eq. (<ref>) is proportional to the small muon Yukawa coupling squared or the small gaugino-higgsino mixing factor squared. Had the second slepton been a selectron, the BR would have been similar as the gaugino-higgsino mixing would dominate the decays to neutralinos/charginos, while for staus the corresponding branching ratio is only a few percent as the tau Yukawa is large enough to dominate the branching ratio. As a pre-selection, we require a single lepton and at least two b-jets, as shown in Tab. <ref>. We use a working point, where the b-jet tagger achieves 70% efficiency and only a 1.5% probability of misidentifying a light-parton jet as a b-one <cit.>. Then several cuts are imposed to select the Higgs signal as per the process in Fig. <ref>. The leading lepton is dominantly produced from the process ν̃_1→ e + χ̃_1^±. As the mass difference between sneutrino and the lightest chargino is larger than 500 GeV for BP1 and 400 GeV for BP2, we choose the transverse momentum of the leading lepton to be larger than 400 GeV to preserve the single lepton signal and reduce the background, as shown in Fig. <ref>. The E_T (MET) cut is chosen to be 500 GeV as the NLSP mass is around that value. In order handle properly the MC generation of the tt̅ background, we add a cut at the generation level (MET above 300 GeV) so as to generate this SM process automatically in the signal region of interest. The Higgs selection is done by choosing the interval of invariant mass of the leading and next-to-leading b-jets from 100 GeV to 150 GeV. Fig. <ref> shows a peak around the SM-like Higgs mass for the signal and W^± h background, while the tt̅ noise is rather flat therein. Hence, this requirement proves effective against the latter. Finally, the 100 GeV cut on the transverse mass defined using the highest p_T lepton plus missing transverse momentum, M_T(l_1,E_T), can also significantly reduce background, especially tt̅, as evident from Fig. <ref>. §.§ Numerical analysis We have applied the cuts of Tab. <ref> to all BPs as well as backgrounds and the results are presented in Tab. <ref>, for the discussed HE-LHC energy and luminosity. As expected, Type-III seesaw preserves more signal events (25.8 for BP1 and 27.7 for BP2) than Type-I seesaw (15.5 for BP3 and 9.2 for BP4). Furthermore, BP2 and BP4 show the interesting feature of having fewer initial events (compared to BP1 and BP3, respectively) but displaying a similar final result. This is because the sneutrino and smuon in BP2(BP4) are heavier than those in BP1(BP3), leading to a larger MET and higher transverse momentum of the leading lepton (p_T(ℓ_1)), thereby increasing the efficiency of the corresponding selections. The significances are shown in Tab. 
<ref>, for the usual HE-LHC parameters, wherein one can appreciate rather significant signal excesses above the SM backgrounds for Type-III seesaw while for Type-I seesaw the sensitivity is somewhat limited (but larger values of Yukawa couplings could be probed and there could be room to improve the analysis or increase the amount of data). We also tested a benchmark similar to BP1, but with the mass ordering m(ẽ)<m(τ̃)<m(μ̃) with the smuon too heavy to be produced. This gave just 0.6 events after the cuts, so we can get a significant signal only arising from selectrons or smuons and their sneutrinos. In addition it is essential for our analysis that there is a significant mass splitting between the sleptons and the LSP. With a softer MET cut the tt background would be problematic, while the cut on the transverse mass of the lepton and MET would keep W^±h under control. In summary, though, it is clear that the HE-LHC is a machine with clear potential to access high scale seesaw models (like Type-III and Type-I embedded within the MSSM) by exploiting the SM-like Higgs (eventually decaying to bb̅) plus a hard lepton and MET signature. § CONCLUSIONS How neutrino mass generation occurs in Nature is one of the outstanding questions in particle physics. Current probes of neutrinos hardly include colliders, as herein such particles appear as E_T, thereby offering no scope to identify their properties. However, in a supersymmetric world, there exist sneutrinos, which share with neutrinos their interactions. Therefore, given that sneutrinos can decay visibly at the LHC (i.e., inside the detectors), it makes sense, in order to study neutrino properties in supersymmetry, to study sneutrinos. One, however, needs a paradigm for supersymmetry to do so, i.e., a model realisation of it, which we assumed here to be the MSSM, supplemented with two kinds of seesaw mechanism for (s)neutrino mass generation, the so-called Type-I and Type-III. These mechanisms have a similar structure to generate neutrino masses and hence both lead to Higgs-sneutrino interactions, which are non-diagonal in flavour space. These two are examples of high scale seesaw mechanisms, wherein the companion neutrinos (to the SM ones) can have masses of order 10^12-10^14 GeV. However, left-handed sneutrino and slepton masses are necessarily linked to the typical supersymmetry breaking scale, which ought to be 10 TeV or so at the most (in order to preserve gauge coupling unification, successful dynamical EWSB, etc.). In the case of a high seesaw scale the neutrino Yukawa couplings are among the largest ones in the model and, due to the structure of the supersymmetric scalar potential, they can lead to observable consequences at the supersymmetry breaking scale. We found that the current LHC, for which √(s)=14 TeV (in turn recalling that √(ŝ) is only a fraction of that), cannot test such seesaw scenarios. However, a possible energy upgrade has been proposed for it: the so-called HE-LHC. This offers √(s)=27 TeV (and ∫ L dt=10 ab^-1), therefore, it is in a position to test the aforementioned seesaw scenarios of neutrino mass generation. In this paper, we have, in particular, tested the scope of a particular signal stemming from these two seesaw mechanisms. In fact, the signature is common to both, i.e., charged current induced slepton-sneutrino production and subsequent decay into the SM-like Higgs boson (in turn decaying to bb̅ pairs), a single lepton (l=e,μ) and MET (or E_T). 
Upon assessing that the single lepton channel (as opposed to multi-lepton ones also stemming in these two scenarios) is the most sensitive one, for any number of b-jets beyond 1, we have devised a simple cut-and-count analysis, deployed identically for both Type-I and -III, that has enabled us to reach evidence to discovery significances at the HE-LHC for the Type-III case while for the Type-I case a more refined selection and/or additional data would be required. This was shown, in both cases, for BPs currently compliant with standard theoretical requirements as well as current experimental searches. Parameterwise, the signature requires the gauginos to be heavier than the sleptons, a sufficient mass splitting (≳ 300 GeV) between the sleptons and the higgsino-like LSP and a sufficient mass splitting between the slepton generations so that the decay with a Higgs boson is kinematically allowed. Even though this signal is common to the two seesaw models, the fact that in Type-I seesaw only sneutrinos have decay modes containing Higgs bosons, while for Type-III also charged sleptons have such decay channels allows us to distinguish the models. This distinction might be more difficult at a hadron collider but, if there was an electron-positron collider with sufficient collision energy, the pair production of charged sleptons above √(s)=2m_ℓ̃ would lead to an enhanced signal with Higgs bosons in case of Type-III, while no such an enhancement would be present in Type-I. As an outlook of our work, we would like to highlight that a Future Circular Collider in hadron-hadron mode (FCC-hh) <cit.>, running at √(s) values up to 100 TeV, will not improve the scope of the HE-LHC since, herein, background rates increase more that the signal ones that we pursued (although this may not be true for other channels not considered here). Altogether, we have shown that there exist cases where, in supersymmetric theories, it is possible to probe the neutrino mass generation mechanism through sneutrino phy­sics while the (seesaw) scale related to this mechanism is extremely high, roughly, up to 10^14 GeV. § ACKNOWLEDGEMENTS SM is supported in part through the NExT Institute and STFC Consolidated Grant No. ST/L000296/1. HW is supported by the Carl Trygger Foundation under grant No. CTS18:164. We finally acknowledge the use of the IRIDIS5 High-Perfor­mance Computing Facility and associated support services at the University of Southampton in the completion of this work. 99 Super-Kamiokande:1998kpq Y. Fukuda et al. [Super-Kamiokande], Phys. Rev. Lett. 81 (1998), 1562-1567 [arXiv:hep-ex/9807003 [hep-ex]]. Minkowski:1977sc P. Minkowski, Phys. Lett. B 67 (1977), 421. Konetschny:1977bn W. Konetschny and W. Kummer, Phys. Lett. B 70 (1977), 433. Gell-Mann:1979vob M. Gell-Mann, P. Ramond and R. Slansky, Conf. Proc. C 790927 (1979), 315 [arXiv:1306.4669 [hep-th]]. Mohapatra:1980yp R. N. Mohapatra and G. Senjanovic, Phys. Rev. D 23 (1981), 165-180. Foot:1988aq R. Foot, H. Lew, X. G. He and G. C. Joshi, Z. Phys. C 44 (1989), 441. Khalil:2022toi S. Khalil and S. Moretti, CRC Press, 2022, ISBN 978-1-138-33643-8. Moretti:2019ulc S. Moretti and S. Khalil, CRC Press, 2019, ISBN 978-0-367-87662-3. CMS:2017ybg A. M. Sirunyan et al. [CMS], Phys. Rev. Lett. 119 (2017) no.22, 221802 [arXiv:1708.07962 [hep-ex]]. CMS:2018jxx A. M. Sirunyan et al. [CMS], JHEP 01 (2019), 122 [arXiv:1806.10905 [hep-ex]]. ATLAS:2019kpx G. Aad et al. [ATLAS], JHEP 10 (2019), 265 [arXiv:1905.09787 [hep-ex]]. ATLAS:2020wop G. Aad et al. [ATLAS], Eur. Phys. J. 
C 81 (2021) no.3, 218 [arXiv:2008.07949 [hep-ex]]. Dimopoulos:1981zb S. Dimopoulos and H. Georgi, Nucl. Phys. B 193 (1981), 150. Petrov:2013nia A. A. Petrov and W. Shepherd, Phys. Lett. B 730 (2014), 178 [arXiv:1311.1511 [hep-ph]]. Berlin:2014cfa A. Berlin, T. Lin and L. T. Wang, JHEP 06 (2014), 078 [arXiv:1402.7074 [hep-ph]]. ATLAS:2019lff G. Aad et al. [ATLAS], Eur. Phys. J. C 80 (2020) no.2, 123 [arXiv:1908.08215 [hep-ex]]. Gianotti:2002xx F. Gianotti, M. L. Mangano, T. Virdee, S. Abdullin, G. Azuelos, A. Ball, D. Barberis, A. Belyaev, P. Bloch and M. Bosman, et al. Eur. Phys. J. C 39 (2005), 293 [arXiv:hep-ph/0204087 [hep-ph]]. FCC:2018bvk A. Abada et al. [FCC], Eur. Phys. J. ST 228 (2019) no.5, 1109. ATLAS:2022enb G. Aad et al. [ATLAS], JHEP 06 (2023), 016 [arXiv:2207.00230 [hep-ex]]. Staub:2015kfa F. Staub, Adv. High Energy Phys. 2015 (2015), 840780 [arXiv:1503.04200 [hep-ph]]. Porod:2003um W. Porod, Comput. Phys. Commun. 153 (2003), 275 [arXiv:hep-ph/0301101 [hep-ph]]. Porod:2011nf W. Porod and F. Staub, Comput. Phys. Commun. 183 (2012), 2458 [arXiv:1104.1573 [hep-ph]]. Alwall:2011uj J. Alwall, M. Herquet, F. Maltoni, O. Mattelaer and T. Stelzer, JHEP 06 (2011), 128 [arXiv:1106.0522 [hep-ph]]. Sjostrand:2014zea T. Sjöstrand, S. Ask, J. R. Christiansen, R. Corke, N. Desai, P. Ilten, S. Mrenna, S. Prestel, C. O. Rasmussen and P. Z. Skands, Comput. Phys. Commun. 191 (2015), 159 [arXiv:1410.3012 [hep-ph]]. deFavereau:2013fsa J. de Favereau et al. [DELPHES 3], JHEP 02 (2014), 057 [arXiv:1307.6346 [hep-ex]]. Conte:2012fm E. Conte, B. Fuks and G. Serret, Comput. Phys. Commun. 184 (2013), 222 [arXiv:1206.1599 [hep-ph]]. ParticleDataGroup:2022pth R. L. Workman et al. [Particle Data Group], PTEP 2022 (2022), 083C01 CMS:2012feb S. Chatrchyan et al. [CMS], JINST 8 (2013), P04013 [arXiv:1211.4462 [hep-ex]]. FCC:2018byv A. Abada et al. [FCC], Eur. Phys. J. C 79 (2019) no.6, 474.
http://arxiv.org/abs/2307.05982v1
20230712075755
Stability of wandering bumps for Hawkes processes interacting on the circle
[ "Zoé Agathe-Nerine" ]
math.PR
[ "math.PR" ]
We consider a population of Hawkes processes modeling the activity of N interacting neurons. The neurons are regularly positioned on the circle [-π, π], and the connectivity between neurons is given by a cosine kernel. The firing rate function is a sigmoid. The large population limit admits a locally stable manifold of stationary solutions. The main result of the paper concerns the long-time proximity of the synaptic voltage of the population to this manifold in polynomial times in N. We show in particular that the phase of the voltage along this manifold converges towards a Brownian motion on a time scale of order N. Keywords. Multivariate nonlinear Hawkes processes, Mean-field systems, Neural Field Equation, Spatially extended system, Stationary bumps. AMS Classification. 60F15, 60G55, 60K35, 44A35, 92B20. § INTRODUCTION §.§ Hawkes Processes and Neural Field Equation In the present paper we study the large time behavior of a population of interacting and spiking neurons indexed by i=1,⋯,N, N≥ 1, as the size of the population N tends to infinity. We model the activity of a neuron by a point process where each point represents the time of a spike: for i=1,⋯,N, Z_N,i(t) counts the number of spikes during the time interval [0,t] of the ith neuron of the population. Denoting λ_N,i(t) as the conditional intensity of Z_N,i at time t, that is 𝐏( Z_N,i jumps between (t,t+dt) |ℱ_t)= λ_N,i(t)dt, where ℱ_t:=σ( Z_N,i(s), s≤ t, 1≤ i≤ N), we want to account for the dependence of the activity of a neuron on the past of the whole population: the spike of one neuron can trigger other spikes. Hawkes processes are then a natural choice to emphasize this interdependency and we take here λ_N,i(t)=f_κ,ϱ( ρ(x_i)e^-t+2πN∑_j=1^N cos(x_i-x_j) ∫_0^t- e^-(t-s) dZ_N,j(s)), i=1,…,N. The neurons are located on the circle S=(-π,π] with positions (x_i)_1≤ i ≤ N regularly distributed, that is x_i=πN( 2i - N). We subdivide S into N intervals of length 2π/ N denoted by B_N,i=(x_i-1,x_i] for 1≤ i ≤ N, with x_0:=-π. The function f_κ,ϱ : ℝ⟶ℝ_+ models the synaptic integration of neuron i with respect to the input of the other neurons j in the population, modulated by the spatial kernel cos(x_i-x_j). It is chosen as a sigmoid with parameters (κ,ϱ), κ>0, ϱ∈ (0,1), that is f_κ,ϱ(u):=(1+e^-(u-ϱ)/κ)^-1. The function ρ : S ⟶ℝ represents the initial inhomogeneous voltage of the population and leaks at rate 1. The exponential term e^-(t-s) in the integral in (<ref>) quantifies how a jump lying back t-s time units in the past affects the present (at time t) intensity: each neuron progressively forgets its past. The main object of interest of the paper is the synaptic voltage U_N,i(t)= ρ(x_i)e^-t+2πN∑_j=1^N cos(x_i-x_j) ∫_0^t e^-(t-s) dZ_N,j(s)=:ρ(x_i)e^-t + X_N,i(t), (i.e. λ_N,i(t)=f_κ,ϱ(U_N,i(t-))) and more precisely the random profile defined for all x∈ S by: U_N(t)(x):=∑_i=1^N U_N,i(t) 1_x∈ B_N,i. The specific form of (<ref>) originates from the so-called ring model introduced by <cit.>, modelling the activity of neurons in the visual cortex on a mesoscopic scale. Here each position x∈ S represents a preferred orientation for each neuron, see the biological works of <cit.> and the mathematical works of <cit.> amongst others. We are looking here at the microscopic counterpart of this model. It means that neurons that prefer close orientations tend to excite each other, whereas neurons with opposite orientations inhibit each other.
Making κ→ 0 in (<ref>), we see that f_κ, ϱ converges towards H_ϱ the Heaviside function H_ϱ(u)= 1_u≥ϱ. Hence for κ small, a neuron can spike only when it has a high potential with rate approximately 1, and with rate approximately 0 otherwise. This model (<ref>) is a specific case of a larger class of mean-field Hawkes processes for which one can write the intensity in the form λ_N,i(t)=μ_t(x_i)+f( v_t(x_i)+1N∑_j=1^N w_ij^(N)∫_0^t-h(t-s) dZ_N,j(s)), i=1,…,N. The current model (<ref>) corresponds to the choice h(t)=e^-t and w_ij^(N)=2πcos(x_i-x_j). In (<ref>), the neurons are placed in a spatial domain I endowed with ν a probability measure that describes the macroscopic distribution of the positions. The parameter function μ_t :  I ⟶ℝ_+ represents a spontaneous activity of the neuron at time t, v_t :  I ⟶ℝ a past activity, h is the memory kernel of the system, f : ℝ⟶ℝ^+ and w_ij^(N) represents the interaction between neurons i and j. For a suitable class of connectivity sequence (w_ij^(N)) that can be approximated by some macroscopic interaction kernel w(x,y) as N→∞ (see <cit.> for precise statements), a usual propagation of chaos result as N→∞ (see <cit.>, <cit.>, <cit.>) may be stated as follows: for fixed T>0, there exists some C(T)>0 such that sup_1≤ i ≤ N𝐄(sup_s∈ [0,T]| Z_N,i(s) - Z_i(s) |) ≤C(T)√(N), where the limiting process (Z_i, i=1,…, N) consists of independent copies of inhomogeneous Poisson process suitably coupled to Z_N,i with intensity (λ_t(x_i))_t≥ 0 solving λ_t(x)=μ_t(x)+f( v_t(x)+∫_I w(x,y) ∫_0^t h(t-s) λ_s(y)dsν(dy)) (see the above references for details on this coupling). Moreover, for the specific choice h(t)=e^-t, denoting the macroscopic potential of a neuron (the synaptic current) with position x at time t by u_t(x):=v_t(x)+∫_I w(x,y) ∫_0^t h(t-s) λ_s(y)dsν(dy), an easy computation (see <cit.>) gives that, when v_t(x)=ρ(x)e^-t, u solves the Neural Field Equation (NFE) ∂ u_t(x)∂ t=- u_t(x)+∫_I w(x,y)f(u_t(y))ν(dy), t≥ 0, with initial condition u_0=ρ. The NFE that first appears in <cit.> has been extensively studied in the literature, mostly from a phenomenological perspective <cit.>, and is an important example of macroscopic neural dynamics with non-local interactions (we refer to <cit.> for an extensive review on the subject). Let us mention here an important point: whereas the analysis of <cit.> requires the measure ν in (<ref>) to be a probability measure on I, the historical version of the NFE was originally studied when ν(dy)=dy is the Lebesgue measure on ℝ. In this last case, thanks to its translation invariance of the Lebesgue measure, one can show the existence of travelling waves solutions to (<ref>), see <cit.> for details. The same analysis when ν(dy)=dy is remplaced by a probability measure fails, as translation invariance of (<ref>) is then broken. In this respect, the present choice of I=S and ν(dy)= 1_[-π,π)/2πdy combines the two previous advantages: ν is a probability measure (hence the previous analysis when N→∞ applies) and translation invariance is preserved in the present periodic case. It can be shown (<cit.>) that (<ref>) exhibits localized patterns (wandering bumps) which are stationary pulse solutions. We are interested in this paper in the long time behavior of the microscopic system (<ref>) and its proximity to these wandering bumps. Before focusing on the microscopic scale, we say a few words on the behavior of the macroscopic system (<ref>)/(<ref>). 
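A minimal numerical illustration of the macroscopic dynamics is easy to set up: discretizing S on a regular grid and integrating ∂_t u = -u + ∫_S cos(·-y) f_κ,ϱ(u(y)) dy by an explicit Euler scheme exhibits the relaxation of a profile started near the cosine family towards a stationary bump. The grid size, time step and parameter values (κ, ϱ) below are illustrative choices, not those used for the figures of the paper.

```python
import numpy as np

M, dt, T = 256, 0.01, 30.0
kappa, rho_thr = 0.05, 0.5                       # illustrative sigmoid parameters
x = -np.pi + 2 * np.pi * np.arange(1, M + 1) / M
dy = 2 * np.pi / M

f = lambda u: 1.0 / (1.0 + np.exp(-(u - rho_thr) / kappa))
K = np.cos(x[:, None] - x[None, :])              # cosine kernel on the grid

u = 1.2 * np.cos(x + 0.3)                        # start near the cosine family
for _ in range(int(T / dt)):                     # explicit Euler for du/dt = -u + K f(u) dy
    u += dt * (-u + (K @ f(u)) * dy)

# The profile relaxes to A cos(x + phi); project on cos/sin to read off A and phi.
a = np.sum(u * np.cos(x)) * dy / np.pi
b = np.sum(u * np.sin(x)) * dy / np.pi
print("amplitude ~", np.hypot(a, b), "phase ~", np.arctan2(-b, a))
```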
In the pure mean-field case (when w_ij^(N)=1 for all i, j), the spatial dependency is no longer relevant and (<ref>) reduces to the scalar nonlinear convolution equation λ_t=μ_t+f(v_t+∫_0^t h(t-s)λ_s ds). An easy instance concerns the so-called linear case where f(x)=x, μ_t=μ and ν_t=0: in this situation the behavior of λ_t as t→∞ is well known. There is a phase transition (<cit.>) depending on the memory kernel h: when ‖ h ‖_1=∫_0^∞ h(t) dt<1 (the subcritical case), λ_tμ1-‖ h ‖_1, whereas when ‖ h ‖_1>1 (the supercritical case), λ_t∞. This phase transition was extended to the inhomogeneous case in <cit.> (and more especially where the interaction is made through the realisation of weighted random graphs), and the existence of such a phase transition now reads in terms of ‖ h ‖_1 r_∞<1 (then λ_t(x)→ℓ(x) the unique solution of ℓ(x)=μ(x)+ ∫_I w(x,y)‖ h ‖_1 ℓ(y)ν(dy)) and ‖ h ‖_1 r_∞>1 (then ‖λ_t‖_2→∞), where r_∞ is the spectral radius of the interaction operator T_W g(x)↦∫_I w(x, y)g(y)ν(dy). In the fully inhomogeneous case and nonlinear case (f no longer equal to Id), a sufficient condition for convergence of λ_t is given in <cit.>: whenever ‖ f' ‖_∞‖ h ‖_1 r_∞ <1, λ_t converges to ℓ as t→∞, ℓ being the unique solution to ℓ=μ+f(‖ h ‖_1 T_W ℓ). Note that the present model (<ref>) obviously does not satisfy (<ref>), as ‖ f'‖_∞ is very large (recall (<ref>): f is a sigmoid close to the Heaviside function). Understanding the longtime behavior of λ_t when (<ref>) does not hold may be a difficult task for general h. However the present model is sufficiently simple to be analyzed rigorously: as it was originally noted by <cit.>, the stationary points of (<ref>) when w is a cosine can be found by solving an appropriate fixed point relation (see (<ref>) below) and by invariance by translation, each fixed-point gives rise to a circle of stationary solutions to (<ref>). One part of the proof will be to show the local stability of these circles (extending the results of <cit.> when f is the Heaviside function). The main concern of the paper is to analyse the microscopic system (<ref>) on a long time scale. An issue common to all mean-field models (and their perturbations) is that there is, in general, no possibility to interchange the limits N→∞ and t→∞. Specifying to Hawkes processes, the constant C(T) in (<ref>) is of the form exp(CT), such that (<ref>) remains only relevant up to T ∼ c log N with c sufficiently small. In the linear subcritical case, C(T) is linear (C(T)=CT) so that the mean-field approximation remains relevant up to T = o( √(N)) (<cit.>). In a previous work <cit.>, we showed that, in the subcritical regime defined by (<ref>) with h(t)=e^-t, the macroscopic intensity (<ref>) converges to ℓ defined by (<ref>) and the microscopic intensity (<ref>) remains close to this limit up to polynomial times in N. Here, the main difference is that (<ref>) admits a manifold of stable stationary solutions parameterized by S, instead of a unique one. We show here that, with some initial condition close to this manifold, our microscopic process (<ref>) stays close to the manifold up to time horizons that are polynomial in N, and moreover the dynamics of the microscopic current follows a Brownian motion on the manifold. Organization of the paper The paper is organized as follows: after introducing some notations, we start in Section <ref> by introducing the precise mathematical set-up. In Section <ref>, we present the main results of our paper. 
Section <ref> is divided into three parts: in the first part <ref>, we present the deterministic dynamics of (<ref>) and the manifold of stationary solutions 𝒰 defined in (<ref>). In the second part we introduce two ways of defining some phase reduction along 𝒰, the variational phase (Proposition <ref>) and isochronal phase (Proposition <ref>). In the last part, Theorem <ref> ensures that if the system is close to 𝒰, it stays so for a long time, and with Theorem <ref>, we analyze the dynamics of the isochronal phase of U_N along 𝒰. Such dynamics are represented in the simulations of Figure <ref>. In Section <ref>, we explain how our paper is linked to the present litterature on the subject. In Section <ref>, we sketch the strategy of proof we follow. Section <ref> collects the proofs of the results of Sections <ref> and <ref>, Section <ref> concerns the proof of the proximity between U_N and 𝒰 seen in Theorem <ref> and Section <ref> is devoted to prove the diffusive behavior of U_N along 𝒰 seen in Theorem <ref>. Some technical estimates and computations are gathered in the appendix. Acknowledgments. This is a part of my PhD thesis. I would like to warmly thank my PhD supervisors Eric Luçon and Ellen Saada for introducing this subject, for their useful advices and for their encouragement and guidance. This research has been conducted within the FP2M federation (CNRS FR 2036), and is supported by ANR-19-CE40-0024 (CHAllenges in MAthematical NEuroscience) and ANR-19-CE40-0023 (Project PERISTOCH). I would also like to thank Christophe Poquet for pointing out a mistake in a previous version of the paper. §.§ Notations and definition §.§.§ Notations We denote by C_parameters a constant C>0 which only depends on the parameters inside the lower index. These constants can change from line to line or inside a same equation, and when it is not relevant, we just write C. For any d≥ 1, we denote by | x| and x · y the Euclidean norm and scalar product of x,y∈ℝ^d. For (E,𝒜,μ) a measured space, for a function g in L^p(E,μ) with p≥ 1, we write ‖ g ‖_E,μ,p:=( ∫_E | g |^p dμ)^1/p. When p=2, we denote by ⟨·,·⟩ the Hermitian scalar product in L^2(E,μ). Without ambiguity, we may omit the subscript (E,μ) or μ. For a real-valued bounded function g on a space E, we write ‖ g ‖ _∞ := ‖ g ‖ _E,∞=sup_x∈ E| g(x) |. For (E,d) a metric space, we denote by ‖ g ‖_lip = sup_x≠ y| g(x) - g(y) | / d(x,y) the Lipschitz seminorm of a real-valued function g on E. We denote by 𝒞(E,ℝ) the space of continuous functions from E to ℝ, and 𝒞_b(E,ℝ) the space of continuous bounded ones. For any T>0, we denote by 𝔻([0,T],E) the space of càdlàg (right continuous with left limits) functions defined on [0,T] and taking values in E. For any integer N≥ 1, we denote by 1, N the set {1,⋯,N}. For any h,k,l ∈ E, we denote by Dg(h)[k]∈ S the derivative of g:E→ F at h in the direction k, and similarly for second derivatives D^2g(h)[k,l]. §.§.§ Definition of the model We define now formally our process of interest. Definition <ref> follows a standard representation of point processes as thinning of independent Poisson measures, see <cit.>. Let (π_i(ds,dz))_1≤ i ≤ N be a sequence of i.i.d. Poisson random measures on ℝ_+×ℝ_+ with intensity measure dsdz. The multivariate counting process (Z_N,1(t),...,Z_N,N(t))_t≥ 0 defined by, for all t≥ 0 and i ∈ 1, N: Z_N,i(t) = ∫_0^t ∫_0^∞1_{z≤λ_N,i(s)}π_i(ds,dz), where λ_N,i is defined in (<ref>) is called a multivariate Hawkes process with set of parameters (N,κ,ϱ,ρ). 
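The thinning representation in Definition <ref> is also a recipe for exact simulation: since f_κ,ϱ ≤ 1, the total intensity is bounded by N, so candidate times can be drawn from a rate-N Poisson clock and accepted/attributed with the usual thinning probabilities, updating the exponential memory variables X_N,i between events. The sketch below follows this scheme; the population size, parameters and bump-shaped initial voltage ρ are illustrative.

```python
import numpy as np

def simulate_hawkes_ring(N, T, kappa, rho_thr, rho, seed=0):
    """Thinning simulation of (Z_N,1, ..., Z_N,N) using the bound sum_i lambda_N,i <= N."""
    rng = np.random.default_rng(seed)
    x = np.pi * (2.0 * np.arange(1, N + 1) - N) / N             # positions x_i
    f = lambda u: 1.0 / (1.0 + np.exp(-(u - rho_thr) / kappa))
    jump = (2.0 * np.pi / N) * np.cos(x[:, None] - x[None, :])  # added to X when j spikes
    t, X, spikes = 0.0, np.zeros(N), []
    while True:
        dt = rng.exponential(1.0 / N)                           # candidate inter-event time
        t += dt
        if t > T:
            return np.array(spikes), x
        X *= np.exp(-dt)                                        # exponential memory kernel
        lam = f(rho(x) * np.exp(-t) + X)                        # lambda_N,i(t-)
        if rng.uniform() < lam.sum() / N:                       # accept the candidate spike
            j = rng.choice(N, p=lam / lam.sum())                # attribute it to neuron j
            X += jump[:, j]
            spikes.append((t, j))

# Example: N = 200 neurons started from the bump profile rho(x) = 1.9 cos(x).
spikes, x = simulate_hawkes_ring(N=200, T=20.0, kappa=0.05, rho_thr=0.5,
                                 rho=lambda x: 1.9 * np.cos(x))
```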
It has been showed in several works (see e.g. <cit.> amongst others) that the process defined in (<ref>) is well posed in the following sense. For a fixed realisation of the family (π_i)_1≤ i ≤ N, there exists a pathwise unique multivariate Hawkes process (in the sense of Definition <ref>) such that for any T<∞, sup_t∈ [0,T]sup_1≤ i ≤ N𝐄[Z_N,i(t)] <∞. Proposition <ref> can be found in <cit.>. In our framework, the macroscopic intensity (<ref>) population limits is λ_t(x)=f_κ,ϱ( ρ(x)e^- t+∫_S cos(x-y) ∫_0^t e^-(t-s)λ_s(y)dsdy), and the neural field equation (<ref>) becomes ∂ u_t(x)∂ t=- u_t(x)+∫_S cos(x-y)f_κ,ϱ(u_t(y))dy. Let T>0. There exists a unique solution (u_t)_t∈ [0,T] in 𝒞_b(S, ℝ) to (<ref>) with initial condition u_0=ρ. Proposition <ref> can be found in <cit.>, and follows from a standard Grönwall estimate. We can then define the flow of (<ref>) by (t,g)↦ψ_t(g), that is the solution at time t of (<ref>) starting from g at t=0: ψ_t(g)(x)=e^-tg(x)+∫_0^t e^-(t-s)∫_S cos(x-y) f_κ,ϱ(ψ_s(g)(x))ds. § STABILITY OF WANDERING BUMPS FOR INTERACTING HAWKES PROCESSES §.§ Main results §.§.§ Stationary solutions to (<ref>) We are concerned here with the stationary solutions to (<ref>), that is u(x)= ∫_-π^πcos(x-y) f(u(y))dy. We follow a similar approach to <cit.>, see Appendix <ref>. For a general choice of f, if u is solution to (<ref>), then for any ϕ, x↦ u(x+ϕ) is also solution to (<ref>) by invariance of S. Expanding the cosine, (<ref>) becomes u(x)= cos(x) ∫_-π^πcos(y) f(u(y))dy + sin(x) ∫_-π^πsin(y)f(u(y))dy. By translation symmetry, with no loss of generality we can ask ∫_-π^πsin(y)f(u(y))dy=0 and solving (<ref>) means finding A≥ 0 such that A= ∫_S cos (y) f( A cos(y) )dy. As (<ref>) is invariant by translation, any A solution to (<ref>) gives rise to the set 𝒰_A:={x↦ Acos(x+ϕ), ϕ∈ [-π, π]} of stationary solutions to (<ref>). Recall (<ref>), when f=H_ϱ the Heaviside function with threshold ϱ, <cit.> and <cit.> showed that for ϱ∈ [-1,1], the unique solutions to (<ref>) are A=0, A_-(0)=√(1+ϱ) - √(1-ϱ) and A_+(0):=√(1+ϱ) + √(1-ϱ). This result is recalled in Appendix <ref>. One can show that the set 𝒰_A_-(0) is unstable whereas 𝒰_A(0) and 𝒰_0 are locally stable. In the following we focus on the largest fixed point A_+(0) which we rename for A(0) by convenience. Recall that in the paper, we are under the assumption that f=f_κ,ϱ defined in (<ref>) for a small fixed κ. As f_κ,ϱH_ϱ, our first result is that when κ is close enough to 0, we can still find a stationary solution to (<ref>) of the form u=A(κ)cos where A(κ) is also close to A(0). Assume ϱ∈ (-1,1). Then there exists κ_0>0 and a function A:(0,κ_0)→ (|ϱ|,+∞) of class C^1 such that for any κ∈ (0,κ_0), u=A(κ) cos is a stationary solution to (<ref>) when f=f_κ,ϱ and A(κ) A(0) given in (<ref>). Moreover, there exists κ_1∈ (0,κ_0) such that for any κ∈ (0,κ_1), 1<I(1,κ)<2 for I(1,κ):=∫_S f_κ,ϱ'(A(κ)cos(x))dx. Proposition <ref> is based on a simple implicit function argument and is proved in the Appendix <ref>. An illustration of this Proposition is done in Figure <ref>: we see that for each A solving (<ref>) for the Heaviside function, there is indeed another close A solving (<ref>) for the sigmoid function with small κ. For the rest of the paper we fix ϱ∈ (-1,1), κ<κ_1 and A=A(κ) and may omit the indexes (κ,ϱ). We have then established that 𝒰:=(Acos(·+ϕ))_ϕ∈ S=: (u_ϕ)_ϕ∈ S is a set of stationary solutions to (<ref>), which is a manifold parameterized by the circle S. 
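Numerically, the scalar equation (<ref>) is straightforward to solve: the sketch below evaluates A ↦ ∫_S cos(y) f_κ,ϱ(A cos(y)) dy by quadrature, brackets the largest root near the Heaviside value A_+(0) = √(1+ϱ) + √(1-ϱ), and checks that I(1,κ) indeed falls in (1,2) for the chosen parameters; the values of κ and ϱ are illustrative.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

kappa, rho_thr = 0.05, 0.5
f = lambda u: 1.0 / (1.0 + np.exp(-(u - rho_thr) / kappa))

def G(A):
    """Right-hand side of the fixed-point equation A = int_S cos(y) f(A cos(y)) dy."""
    return quad(lambda y: np.cos(y) * f(A * np.cos(y)), -np.pi, np.pi)[0]

# Heaviside (kappa -> 0) reference values A_-(0) and A_+(0).
A_minus0 = np.sqrt(1 + rho_thr) - np.sqrt(1 - rho_thr)
A_plus0 = np.sqrt(1 + rho_thr) + np.sqrt(1 - rho_thr)

# The largest root of A - G(A) = 0 sits near A_+(0) for small kappa.
A_kappa = brentq(lambda A: A - G(A), 0.8 * A_plus0, 1.2 * A_plus0)
print(A_plus0, A_kappa)

# Sanity check of the proposition: I(1,kappa) = int f'(A cos(x)) dx should lie in (1,2).
fprime = lambda u: f(u) * (1 - f(u)) / kappa
print(quad(lambda y: fprime(A_kappa * np.cos(y)), -np.pi, np.pi)[0])
```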
To study the stability of these stationary solutions, we introduce linear operators that are also parameterized by the circle S. Let ϕ∈ S, and define for any function ψ∈ L^2(S) T_ϕψ(x) :=∫_S cos(x-y)f'(u_ϕ(y))ψ(y)dy ℒ_ϕψ :=-ψ + T_ϕψ. Define also L^2_ϕ:=L^2_f'(u_ϕ), that is the L^2 weighted space defined by the scalar product ⟨ g_1,g_2⟩_2,ϕ = ∫_S g_1(x)g_2(x)f '(u_ϕ(x))dx. We denote by ‖·‖_2,ϕ the associated norm. Recall (<ref>) and define v_ϕ:=∂_x u_ϕ=-A sin(·+ϕ). We consider also the orthogonal projection P_ϕ^∘ on Span(v_ϕ) and its complementary projection P_ϕ^⊥, both defined for any g∈ L^2_ϕ by P_ϕ^∘ g := ⟨ g,v_ϕ⟩_2,ϕ‖ v_ϕ‖_2,ϕ v_ϕ =: α_ϕ^∘(g) v_ϕ P_ϕ^⊥ g := g-P_ϕ^∘ g. We will also need the projection on Span(u_ϕ) hence we define α_ϕ^γ(g) =⟨ g,u_ϕ⟩_2,ϕ‖ u_ϕ‖_2,ϕ. Without ambiguity and for a general ϕ, we may write ‖·‖_ϕ instead of ‖·‖_2,ϕ to gain in clarity. Note that by compactness of S, since 0<inf_[-A,A] f' <sup_[-A,A]f'<∞, the norms ‖·‖_2 and ‖·‖_2,ϕ are equivalent: there exists C_0, C_0>0 (independent of ϕ) such that for any g∈ L^2(S), C_0 ‖ g ‖_2≤sup_ϕ∈ S‖ g ‖_2,ϕ≤ C_0 ‖ g ‖_2. Let ϕ∈ S. The operator ℒ_ϕ defined in (<ref>) is self-adjoint in L^2_ϕ and has three distinct eigenvalues, -1, 0 and γ∈ (-1, 0). If for ι∈{ -1, γ, 0}, we denote by ℰ_ι the eigenspace associated to the eigenvalue ι, one has that ℰ_ 0= Ker ℒ_ϕ= Span (v_ϕ), ℰ_γ= Span(u_ϕ) and ℰ_ -1= ( Span(u_ϕ, v_ϕ))^⊥. Moreover, ℰ_ 0⊥ ℰ_γ. Furthermore, there exists C_ℒ, C_P such that for any ϕ∈ S, ℒ_ϕ generates an analytic semigroup of contraction (e^tℒ_ϕ) and for any g∈ L^2_ϕ, t≥ 0, ‖ e^tℒ_ϕ P_ϕ^⊥ g ‖_2,ϕ ≤ e^tγ‖ P_ϕ^⊥ g‖_ϕ, ‖ e^tℒ_ϕg‖_2 ≤ C_ℒ‖ g ‖_2, ‖ e^tℒ_ϕP_ϕ^⊥ g ‖_2,ϕ ≤ C_P ‖ g ‖_2,ϕ. Proposition <ref> is proved in Section <ref>. A straightforward corollary of Proposition <ref> is the following The manifold 𝒰 is locally stable under the flow (<ref>): there exists ε_0>0 such that, for any g∈ L^2(S) satisfying dist_L^2(g,𝒰)≤ε_0, we have lim_t→∞dist_L^2(ψ_t(g),𝒰)=0 where ψ is defined in (<ref>). We denote by B(𝒰,ε_0):={g∈ L^2(I), dist_L^2(g,𝒰)≤ε_0}. §.§.§ Representation on the manifold Recall that we are interested in the behaviour of the process (<ref>), when the initial condition U_N(0) to (<ref>) is close to the manifold 𝒰 introduced in (<ref>). We need a way to define a proper phase reduction of U_N along 𝒰. We have two ways to do so that we use in our results that are well explained in the recent work <cit.>, which takes the NFE as a good class of examples and motivation. The first one is via the variational phase, defined in the following Proposition <ref>: There exists ϖ>0 such that, for any g∈ L^2(S) satisfying dist_L^2(S)(g,𝒰)≤ϖ, there exists a unique phase ϕ:=proj_𝒰(g)∈ S such that P_ϕ^∘ (g-u_ϕ)=0 and the mapping g↦proj_𝒰(g) is smooth. The second one is via the isochronal phase, defined in the following Proposition <ref>. In a few words, as the manifold 𝒰 is stable and attractive, a solution to the NFE from a neighborhood of 𝒰 is attracted to 𝒰 and converges to it. As t→∞, it identifies with one stationary solution of the manifold, we called it its isochron. For any g∈ B(𝒰,ε_0) (see Corollary <ref>), there exists a unique θ(g) ∈ S such that ‖ψ_t(g)-u_θ(g)‖_20, where ψ is defined in (<ref>). Such a map θ:B(𝒰,ε_0)→ S is called the isochronal map of 𝒰, and θ(g) is the isochronal phase of g. 
Moreover, it is three times continuously Fréchet differentiable (in fact C^∞), and in particular for u_ϕ∈𝒰, h,l∈ L^2(S), we have Dθ(u_ϕ)[h]= ⟨ v_ϕ,h⟩_ϕ‖ v_ϕ‖_ϕ, and D^2θ(u_ϕ)[h,l] = 12A^2( α_ϕ^∘(h)β_ϕ(v_ϕ,l) +α_ϕ^∘(l)β_ϕ(v_ϕ,h)+β_ϕ(h,l)) + 1+ γ/ 2A^ 2(1- γ)( α_ϕ^γ(h)β_ϕ(u_ϕ,l) +α_ϕ^γ(l)β_ϕ(u_ϕ,h)) - (2- γ)(1+ γ)/ 2(1-γ)( α_ϕ^∘(h)α_ϕ^∘(l)+α_ϕ^γ(h)α_ϕ^γ(l)), where α_ϕ^∘ and α_ϕ^γ are respectively defined in (<ref>) and (<ref>), and β_ϕ(h,l) :=∫_S f”(u_ϕ(y))v_ϕ(y) h(y)l(y)dy. Note that in particular, as u_θ(g)∈𝒰 and 𝒰 consists in stationary points, ψ_t(u_θ(g))=u_θ(g). Propositions <ref> and <ref> are proved in Section <ref>. §.§.§ Long time behavior The first result uses the variational phase to ensure that (U_N(t)) defined in (<ref>) reaches a neighborhood of 𝒰 in time of order log(N) and stays inside it for arbitrary polynomial times in N. Suppose that ρ∈ B(𝒰,ε_0) and ‖ U_N(0)-ρ‖_20. Let α,τ_f>0. There exists some C>0 such that, defining for any N≥ 1, T_0(N):=Clog(N), for any ε>0, 𝐏(sup_t∈ [T_0(N),N^ατ_f]dist_L^2( U_N(t),𝒰) ≤ε) 1. In fact, we show a more precise result than (<ref>) that will be useful for the proof of Theorem <ref>: we prove that for any fixed η∈ (0,1/4), we have with some constant C>0 𝐏(sup_t∈ [T_0(N),N^ατ_f]dist_L^2( U_N(t),𝒰) ≤ CN^η-1/2) 1. Theorem <ref> is proved in Section <ref>. The second main result of the paper is the analysis of the behavior of U_N along 𝒰 when α=1. Let ρ∈ B(𝒰,ε_0). Suppose <ref>. Let τ_f>0. There exist a deterministic θ_0∈ S and for every N some τ_0(N)∝log(N)N and a càdlàg process (W_N(t))_t∈ (τ_0(N),τ_f) that converges weakly in 𝔻([0,τ_f],S) towards a standard Brownian such that for every ε>0, lim_N→∞𝐏(sup_τ∈ (τ_0(N),τ_f)‖ U_N(Nτ) - u_θ_0 + σ W_N(τ)‖_2≤ε)=1, where σ:=( 2π∫_S sin^2(x)f(Acos(x))dx)^1/2, with A=A(κ) defined with Proposition <ref>. Theorem <ref> is proved in Section <ref>. We have run several simulations to illustrate our results, seen in Figure <ref>. We represent the evolution of the current U_N(t,x) for t∈ [0,T_max] where the time is on the x-axis and spatial position on the y-axis. The different values taken are scaled with a color bar. We can see the wandering bumps evolving in Figure <ref>, whereas in Figure <ref> the initialization is too far from the manifold and the system is no longer attracted to 𝒰. §.§ Link with the literature Hawkes processes have been introduced in <cit.> to model earthquakes and have been thoroughly studied since, see e.g. <cit.>. The seminal work of <cit.> has renewed the interest for large population of interacting Hawkes processes, which have proven to be particularly useful in a neuroscience context to model the mutually exciting properties of a population of neurons, see for instance <cit.>. In this respect, a common setting for the modelling of interacting neurons is the mean-field framework. For instance, in <cit.>, the authors describe the propagation of chaos in networks of Hodgkin-Huxley and FitzHugh-Nagumo neurons. Another popular model is the integrate-and-fire dynamics, first introduced in the seminal work of Lapique <cit.>, and still studied mathematically, as e.g. in <cit.> and also <cit.>. Several works have extended the mean-field framework to take into account the presence of a macroscopic spatial structure in the interaction, originally for diffusion models (see <cit.>), as well as for Hawkes processes (see <cit.>). The main difficulty with this extension is that we lose the exchangeability specific to homogeneous mean-field models as in <cit.>. 
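In connection with the simulations of Figure <ref> discussed above, we note that the microscopic dynamics (<ref>) can be simulated exactly by a standard thinning scheme, since every intensity f(U_N,i) is bounded by 1. The Python sketch below is purely illustrative: the population size and parameter values are our own choices, the last line is only a crude Fourier estimate of the bump phase, and f_κ,ϱ is again replaced by an assumed sigmoid standing in for (<ref>).

import numpy as np
from scipy.special import expit

rng = np.random.default_rng(0)

# illustrative values (our own choice): population size, sigmoid parameters, time horizon
N, kappa, thr, T_max = 200, 0.1, 0.5, 50.0
x = -np.pi + 2.0 * np.pi * np.arange(N) / N             # neuron positions on S
A0 = np.sqrt(1.0 + thr) + np.sqrt(1.0 - thr)            # Heaviside amplitude A_+(0)
U = A0 * np.cos(x)                                      # initial profile close to the manifold U
W = 2.0 * np.pi * np.cos(x[:, None] - x[None, :]) / N   # increment of U_{N,i} when neuron j spikes

def f(u):
    # assumed sigmoid, standing in for f_{kappa,rho}; it is bounded by 1
    return expit((u - thr) / kappa)

t, phase_trace = 0.0, []
while t < T_max:
    dt = rng.exponential(1.0 / N)      # candidate event: each intensity f(U_{N,j}) is bounded by 1
    t += dt
    U *= np.exp(-dt)                   # exponential memory kernel between events
    j = rng.integers(N)
    if rng.random() < f(U[j]):         # thinning acceptance: neuron j actually spikes
        U += W[:, j]
    phase_trace.append((t, -np.angle(np.sum(U * np.exp(1j * x)))))  # crude estimate of the bump phase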
Concerning our present model, <cit.> was the first to provide a rigorous mesoscopic interpretation of the neural field equation (<ref>) in terms of the limit of spatially extended Hawkes processes interacting through a mesoscopic spatial kernel. The recent work <cit.> extends this result to Hawkes processes interacting on inhomogeneous random graphs. Another possibility to circumvent the exchangeability issue would have been to use replica mean-field models as in <cit.> and describe the propagation of chaos for an infinite number of replicas. Note however that this description keeps the size N of the population fixed, whereas we want to have N→∞. Note also that the present model includes interactions that may be negative: this reflects an inhibitory effect among neurons with opposite orientations. Modelling the inhibition present in the brain has historically been difficult. For Hawkes processes, a common approach is to allow the synaptic kernel h in (<ref>) to take negative values. This is however impossible for linear Hawkes processes, as the intensity cannot be negative. To circumvent this, one has to choose a non-negative and nonlinear function f to preserve the non-negativity of the intensity. A classic choice is to take f(x)=max(0,μ+x) (see for instance <cit.> for an estimation model, or <cit.> with h in (<ref>) signed and with compact support). One can also introduce inhibition through a signed multiplicative factor (that may or may not depend on the neuron), see for instance <cit.>. Some works have also partitioned the whole population into two subclasses of neurons, the excitatory ones and the inhibitory ones <cit.>. In the latter, inhibition acts through a (small) multiplicative factor on the intensity of the excitatory population. The present work is another contribution concerning models with inhibition, which is present here through the cosine interaction kernel that takes negative values. This choice is essential to our dynamics, as the balance between excitation and inhibition within the population of neurons allows for a stable manifold of stationary solutions to (<ref>). The analysis of mean-field interacting processes on long time scales has a significant history in the case of interacting diffusions, in particular for phase oscillators such as the Kuramoto model <cit.> (see <cit.> and references therein for a comprehensive review on the subject). The techniques used in the present work have some formal similarities to the ones used for diffusions, the main difference being that with Hawkes processes the noise is Poissonian (rather than Brownian) and multiplicative (rather than additive). The so-called uniform propagation of chaos concerns situations where estimates such as (<ref>) are uniform in time. Such estimates are commonly met in reversible situations (e.g. granular-type media diffusions <cit.>). See also the recent paper <cit.>, where the authors study a uniform propagation of chaos for the FitzHugh-Nagumo diffusive model. Let us comment on the analysis of the Kuramoto model, as it presents some informal proximity with our model. One is here interested in the long-time behavior of the empirical measure μ_N,t:= 1/N∑_i=1^N δ_θ_i,t of the system of interacting diffusions (θ_1, …, θ_N) solving the system of coupled SDEs dθ_i,t= - K/N∑_j=1^N sin( θ_i,t- θ_j,t)dt + dB_i,t, with (B_i) i.i.d. Brownian motions.
Standard propagation of chaos techniques show that μ_N converges weakly on a bounded time interval [0, T] to the solution μ_ t to the nonlinear Fokker-Planck (NFP) equation ∂_t μ_t = 1/2∂_θ^ 2μ_t+K∂_θ( μ_t(sin * μ_t)), (to compare with our microscopic current U_N,i in (<ref>) converging towards u_t solution to the NFE (<ref>)). One can easily prove the existence of a phase transition for (<ref>): when K≤ 1, μ≡ 1/ 2π is the only (stable) stationary point of (<ref>) (subcritical case), whereas it coexists with a stable circle of synchronised profiles when K>1 (supercritical case). A series of papers have analysed the longtime behavior of the empirical measure μ_N of the Kuramoto model (and extensions) in both the subcritical and supercritical cases, the first one being <cit.>, followed by <cit.>. The main arguments of the mentioned papers lie in a careful analysis of two contradictory phenomena that arise on a long-time scale: the stability of the deterministic dynamics around stationary points (that forces μ_ N to remain in a small neighborhood of these points) and the presence of noise in the microscopic system (which makes μ_ N diffuse around these points). We are here in a similar situation to the supercritical case: the deterministic dynamics of the spatial profile U_ N (given by (<ref>)) has a stationary manifold 𝒰 (defined in (<ref>)) which possesses sufficient stability properties, see Corollary <ref>. The point of the analysis relies then on a time discretization and some careful control on the diffusive influence of noise that competes with the deterministic dynamics. In a previous work <cit.>, we have analysed in depth the case where (<ref>) has a unique solution, that would be comparable to the subcritical case of the Kuramoto model. The first main result of the paper is to show that once U_N(0) is close to the stationary manifold 𝒰, it stays so for a long time, see Theorem <ref>. The next step is to find a way to describe the projection of the dynamics onto 𝒰. A convenient tool for this is the use of isochronicity, we refer to <cit.> for a precise approach on the subject, and to <cit.> for their use of isochronicity to study the proximity between the noisy trajectory of interacting particles and the limit cycle in a finite dimensional setting. See also <cit.> where the microscopic system is a diffusion and the large population limit admits a stable periodic solution: they show that the empirical measure stays close to the periodic solution with a random dephasing. The isochron map in this case helps to describe the dephasing as a Brownian motion with a constant drift. Going back to Hawkes processes, several other works have already complemented the propagation of chaos result mentioned in (<ref>) and studied finite approximations of the NFE, mostly at the level of fluctuations. Central Limit Theorems (CLT) have been obtained in <cit.> for homogeneous mean-field Hawkes processes (when both time and N go to infinity) or with age-dependence in <cit.>. One should also mention the functional fluctuation result recently obtained in <cit.>, also in a pure mean-field setting. A result closer to our case with spatial extension is <cit.>, where a functional CLT is obtained for the spatial profile U_ N around its limit. 
Note here that all of these works provide approximation results of quantities such that λ_ N or U_ N that are either valid on a bounded time interval [0, T] or under strict growth condition on T (see in particular the condition T/ N→ 0 for the CLT in <cit.>), whereas we are here concerned with time-scales that grow polynomially with N. Another alternative to study large time behavior is to use a Brownian approximation of the dynamics of U_ N, see the initial work of <cit.>. However this approximation is based on the comparison of the corresponding semigroups and is not uniform in time. Nevertheless, let us comment on this diffusive approximation in large population regime on bounded time intervals that can be found in both <cit.>. A second order approximation of the NFE was proposed in <cit.> with (adapted to the notations of the present article) dU_N(t)=-U_N(t)dt + w∗ f(U_N(t))dt + C∫_S w(x,y) √(f(U_N(t)(y)))/√(N)W(dt,dy), where W is a Gaussian white noise. This approximating diffusion process (<ref>) is a noisy NFE, it can be seen as an intermediate modeling between the microscopic scale given by the Hawkes process and the macroscopic scale given by the NFE. In our framework with a cosine kernel, the infinitesimal increment of the noise in (<ref>) can be expanded as C cos(x) ∫_S cos(y) √(f(U_N(t)(y)))/√(N)W(dt,dy) + C sin(x) ∫_S sin(y) √(f(U_N(t)(y)))/√(N)W(dt,dy). To compare with our result, let us informally project the last quantity on Ker(ℒ_0) introduced in Proposition <ref>. The scalar product ⟨·, v_0⟩_2,0 with v_0=-Asin(·) gives that the cosine term becomes zero and the noise left is a random variable of the form -CA∫_S sin^2 f'(Acos) ∫_S sin (y) √(f(U_N(t)(y)))/√(N)W(dt,dy)=-C∫_S sin (y) √(f(U_N(t)(y)))/√(N)W(dt,dy) using (<ref>). The infinitesimal noise that effectively drives the dynamics of (<ref>) along 𝒰 is then Gaussian with variance proportional to ∫_S sin^2 (y) f(U_N(t)(y))/Ndydt which is exactly the variance found in (<ref>), rescaled by 1/N and where U_N(t) has been replaced by the limit u_t. This analogy remains informal, but shows that our results are compatible to the computations of <cit.> and <cit.>: one could see the present result as a rigorous justification that the approximation introduced by <cit.> can be extended for polynomial times in N. Approximation between Hawkes and Brownian dynamics has also been studied in <cit.>, based on Komlós, Major and Tusnády (KMT) coupling techniques (see <cit.>). Recently, Prodhomme <cit.> used similar KMT coupling techniques applied to finite dimensional Markov chains and found Gaussian approximation to remain precise for very large periods of time. However these results are valid for ℤ^d-valued continous-time Markov chains, it is unclear how they can be applied in our situation (with infinite dimension and space extension). The proof we propose is direct and does not rely on such Brownian coupling. The question of Stochastic Neural Field Equations has also been considered directly from a macroscopic perspective at multiple times. It consists in considering the NFE (<ref>) with an additive or multiplicative spatio-temporal noise, see for instance <cit.>. Existence and uniqueness results have been obtained for various expressions of the noise, see <cit.>. Let us mention in particular <cit.> who propose a heuristical derivation of the diffusion coefficient of the wandering bumps in a setting similar to ours (the ring model with f the Heaviside function). 
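As a quick numerical counterpart to the informal projection above, the diffusion coefficient σ of (<ref>) can be evaluated by quadrature once the amplitude A=A(κ) is known, and then compared with the empirical spreading of the bump phase in microscopic simulations. The sketch below is a minimal, self-contained illustration under the same assumed sigmoid form of f_κ,ϱ and an arbitrary choice of (κ,ϱ) of our own.

import numpy as np
from scipy.integrate import quad
from scipy.special import expit

kappa, thr = 0.05, 0.5                      # illustrative values of (kappa, rho)
f = lambda u: expit((u - thr) / kappa)      # assumed sigmoid standing in for f_{kappa,rho}

# amplitude A(kappa): fixed point of A -> int_S cos(y) f(A cos y) dy, started from A_+(0)
A = np.sqrt(1.0 + thr) + np.sqrt(1.0 - thr)
for _ in range(200):
    A, _ = quad(lambda y: np.cos(y) * f(A * np.cos(y)), -np.pi, np.pi)

# diffusion coefficient: sigma^2 = 2 pi int_S sin^2(x) f(A cos x) dx
sigma2, _ = quad(lambda y: 2.0 * np.pi * np.sin(y) ** 2 * f(A * np.cos(y)), -np.pi, np.pi)
print(A, np.sqrt(sigma2))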
See also <cit.> where the author studies the effect of the added noise on patterns such that traveling waves and oscillations thanks to the use of some projection of the dynamics, to obtain long time stability. Whereas all of the previous results are concerned with a macroscopic approach concerning stochastic perturbation of the NFE, we provide here a rigorous and microscopic interpretation of this phenomenon. §.§ Strategy of proof of the long time behavior §.§.§ About Theorem <ref> Section <ref> is devoted to prove the proximity result of Theorem <ref>. This in particular requires some spectral estimates on the operators ℒ_ϕ introduced in Definition <ref> and the stability of stationary solutions to (<ref>), results that are gathered in Section <ref> and proved in Section <ref>. The main lines of proof for Theorem <ref> are given in Section <ref>. The strategy of proof is sketched here, and follows the one used in a previous work <cit.>. First we show in Proposition <ref> that one can find some initial time T_0(N)∝log(N) for which dist_L^2( U_N(T_0(N)),𝒰) ≤N^2η√(N), with 0<η<1/4. This essentially boils down to following the predominant deterministic dynamics of the NFE. Let T_f(N)=N^α, we discretize the interval of interest [T_0(N),T_f(N)] into n_f intervals of same length T denoted by [T_i, T_i+1], T chosen sufficiently large below. On each subinterval, we can decompose the dynamics of U_N(t) in terms of, at first order, the linearized dynamics of (<ref>) around any stationary solution, modulo some drift terms coming from the mean-field approximation, some noise term coming from the underlying Poisson measure, and some quadratic remaining error coming from the nonlinearity of f. It gives a semimartingale decomposition of U_N(t)- u_proj(U_N(T_i)) for t∈ [T_i,T_i+1], detailed in Section <ref>. Provided one has some sufficent control on each of these terms in the semimartingale expansion on a bounded time interval, we do an iterative procedure that works as follows: the point is to see that provided U_N is initially close to u_proj(U_N(T_i))∈𝒰, it will remain close to it for a time interval of length T for some sufficiently large deterministic T>0 so that the deterministic dynamics prevails upon the other contributions. The time horizon at which one can pursue this recursion is controlled by moment estimates on the noise in Proposition <ref>. §.§.§ About Theorem <ref> Section <ref> is devoted to prove the analysis of the behavior of U_N along 𝒰 seen in Theorem <ref>. We sketch here the strategy of proof. First we use the semimartingale decomposition of U_N dU_N(t)=B_N(t)dt + dM_N(t) (with B_N some drift and M_N a martingale defined in (<ref>)) and Itô formula to write the semimartingale decomposition of θ(U_N(t)) on the interval [T_0(N), Nτ_f]. As in Theorem <ref>, one can show a careful control on each of the terms appearing in the semimartingale decomposition, as done in Section <ref>. The difficulty here is to show rigorously that there is no macroscopic drift appearing on this time scale (this point is essentially due to the invariance by rotation of the whole problem). After rescaling the time by N, we identify the noise with a Brownian motion thanks to Aldous' tightness criterion and Lévy's characterization so that the result of Theorem <ref> follows. §.§.§ Extensions On the interaction kernel Note that Theorem <ref> is of local nature: stability holds provided the initial condition ρ is sufficiently close to 𝒰. 
Following <cit.>, it would be possible to consider the more general interaction kernel w(x,y)=∑_k=0^n A_kcos ( k (x-y)), with more that one Fourier mode. The fixed point equation (<ref>) becomes a more complicated system of equations A_k=∫_S cos(kx) f( ∑_k=0^n A_k cos(kx) ) dx. The exact number of solutions to (<ref>) remain unclear but if one can solve (<ref>) and show local stability of the solutions u_ϕ(x)=∑_k=0^N A_k cos(k(x+ϕ)), the same strategy should apply: we would obtain local stability provided one starts sufficiently close to these structures. Oscillatory behavior Note that 𝒰 consists of stationary points. We claim that a similar strategy should apply also to situations where (<ref>) admits generic oscillations, see <cit.> in a context of diffusion. We have in particular in mind the framework proposed in <cit.>: the authors study interacting Hawkes processes with Erlang memory kernel. The population is divided into classes, and the classes interact with a cycling feedback system, so that the large population limit is attracted to non-constant periodic orbits. It is reasonable to think that our techniques can be transposed to this situation, to show that the microscopic system is closed to the limit cycle under their hypotheses in large times and without using the approximating diffusion process. § STATIONARY SOLUTIONS (PROOFS) Let us first define for any function r∈ L^2(S) ℐ(r):=∫_S r(y)f '(u_0(y))dy, where u_0 is defined in (<ref>). We start by giving a computation Lemma that will be useful in the whole paper. We have ℐ(sin^2)=1, ℐ(cos^2)=ℐ(1)-1 and ℐ(sincos)=0. Recall that u_0=Acos, as A solves (<ref>) by integrating by parts we obtain A= ∫_Scos (y) f ( A cos(y) )dy=A ∫_S sin^2(y) f '( u_0(y) )dy=A ℐ(sin^2), and as A >0 it implies ℐ(sin^2)=1. By integrating by parts we also have -A ℐ(cossin)=∫_-π^πsin(y)f(Acos(y))dy. Since y→sin(y)f(Acos(y)) is odd, we obtain that ℐ(cossin)=0. As cos^2= 1-sin^2 and ℐ is linear, we have ℐ(cos^2)= ℐ(1)-ℐ(sin^2)=ℐ(1)-1. §.§ Stability Here we prove Proposition <ref>. Let ϕ∈ S. Let us first show that the operator ℒ_ϕ is indeed self-adjoint in L^2_ϕ. Let g_1,g_2∈ L^2_ϕ, we have by Fubini's theorem and recalling Definition <ref> ⟨ℒ_ϕg_1,g_2⟩_ϕ = -∫_S g_1 g_2 f '(u_ϕ) + ∫_S ( ∫_S cos(x-y)f '(u_ϕ(y))g_1(y)dy)g_2(x)f '(u_ϕ(x))dx = -∫_S g_1 g_2 f '(u_ϕ) + ∫_S f '(u_ϕ(y))g_1(y) ( ∫_S cos(x-y)g_2(x)f '(u_ϕ(x))dx)dy =⟨ g_1,ℒ_ϕ g_2⟩_ϕ, hence ℒ_ϕ is self-adjoint in L^2_ϕ. We focus now on its spectrum, we want to prove that it has three distinct eigenvalues, -1, 0 and γ∈(-1,0). The following arguments follow the same procedure of the one that can be found in <cit.>. First note that T_ϕ is compact in L^2_ϕ (in fact, with finite range). Hence it has a discrete spectrum consisting of eigenvalues. Let λ be an eigenvalue of ℒ_ϕ and ψ an associated eigenvector, that is ℒ_ϕψ = λψ hence (λ+1)ψ = T_ϕψ with Definition <ref>. As seen in Remark <ref>, λ does not depend on ϕ and if ψ is an eigenvector for ϕ=0, then ψ(·-ϕ) is an eigenvector for ϕ. Hence, in the following, we focus on the case ϕ=0. We have T_0ψ(x)=A_0(ψ)cos(x) + B_0(ψ)sin(x), with A_0(ψ):= ∫_S cos(y)f '( u_0(y))ψ(y)dy, B_0(ψ) := ∫_S sin(y)f '( u_0(y))ψ(y)dy. The eigenvalue -1 is spanned by functions ψ∈ L^2 such that A_0(ψ)=B_0(ψ)=0. Recall (<ref>), we have that, since (λ+1)ψ=T_0ψ, (λ+1) A_0(ψ) = ∫_S cos(y) (λ+1)ψ(y) f '(u_0(y))dy = ∫_S cos(y) (A_0(ψ)cos(y)+B_0(ψ)sin(y)) f '(u_0(y))dy = A_0(ψ)ℐ(cos^2) + B_0(ψ) ℐ(sincos), and similarly, (λ+1)B_0(ψ)= A_0(ψ) ℐ(sincos) + B_0(ψ) ℐ(sin^2). 
See Lemma <ref> for the computations of ℐ(cos^2), ℐ(sin^2) and ℐ(sincos). Putting these computations into (<ref>) and (<ref>) implies that (λ,ψ) solves ℒ_0ψ=λψ if and only if {[ (λ +1) A_0(ψ) = (ℐ(1)-1) A_0(ψ); (λ +1) B_0(ψ) = B_0(ψ). ]. Recall that with no loss of generality, one can suppose that ψ is such that (A_ 0(ψ), B_ 0(ψ))≠ (0,0). Then ( λ, ψ) solves the previous system if and only if, either λ=0 with A_ 0(ψ)= 0 and B_ 0(ψ)≠ 0 (and hence we see from (<ref>) that the eigenvalue 0 is spanned by sin∝ v_ 0) or λ= γ given by γ:=ℐ(1)-2=∫_S f'(Acos(x))dx-2, with A_ 0(ψ)≠ 0 and B_ 0(ψ)=0, so that the eigenspace related to γ is one-dimensional, spanned by cos∝ u_ 0. The fact that ⟨ u_ϕ , v_ϕ⟩_ϕ=0 follows immediately from the fact that u_ϕ is even and v_ϕ is odd. The last eigenvalue λ=-1 is spanned by ψ such that A(ψ)=B(ψ)=0. To conclude the proof of Proposition <ref>, it remains to prove the inequalities (<ref>), (<ref>) and (<ref>). We come back to a general ϕ∈ S. By definition of the projection P_ϕ^∘ in (<ref>), we have that ℒ_ϕ P_ϕ^∘= 0. Moreover, by definition of P_ϕ^⊥ in (<ref>), we have that for any g∈ L^2_ϕ, P_ϕ^⊥ g belongs in the orthogonal of Ker(ℒ_ϕ) in L^2_ϕ. Then ℒ_ϕ P_ϕ^⊥ =ℒ_ϕ (Id-P_ϕ^∘) generates a contraction semigroup on L^2(S) and (<ref>) follows then from functional analysis (see e.g. Theorem 3.1 of <cit.>). For the two last inequalities, we use Remark <ref>. From the definition of the projection P_ϕ^∘ in (<ref>), we have that e^tℒ_ϕP_ϕ^∘ g = ⟨ g,v_ϕ⟩_ϕ‖ v_ϕ‖_ϕe^tℒ_ϕ v_ϕ= ⟨ g,v_ϕ⟩_ϕ‖ v_ϕ‖_ϕ v_ϕ as v_ϕ∈Ker(ℒ_ϕ). We obtain then ‖ e^tℒ_ϕP_ϕ^∘ g ‖_ϕ≤‖ g ‖_ϕ‖ v_ϕ‖_ϕ. From (<ref>) we have ‖ e^tℒ_ϕP_ϕ^⊥ g ‖_ϕ≤ e^γ t‖ P_ϕ^⊥ g‖_ϕ≤ C_P ‖ g ‖_ϕ for some C_P>0, that is exactly (<ref>). As ‖ e^tℒ_ϕg‖_2≤‖ e^tℒ_ϕP_ϕ^∘ g‖_2 + ‖ e^tℒ_ϕP_ϕ^⊥ g‖_2, (<ref>) follows for the choice C_ℒ=C_1 C_2 max(sup_ϕ∈ S‖ v_ϕ‖_ϕ, C_P). §.§ Projections on the manifold We prove that both the variational phase seen in Proposition <ref> and isochronal phase seen in Proposition <ref> are well defined. (similar to <cit.>[Lemma 2.8]) Define for any (g,ϕ)∈ L^2(S)× S: F(g,ϕ):=∫_S ( g(x) - u_ϕ(x)) v_ϕ(x) f '(u_ϕ(x))dx=⟨ g-u_ϕ,v_ϕ⟩_ϕ. We have for any fixed ϕ_0, F(u_ϕ_0,ϕ_0)=0. Note that F is smooth in both variables as it can be written F(g,ϕ)=-A ∫_S ( g(x) - A cos(x+ϕ)(x)) sin(x+ϕ) f '(u_ϕ(x))dx. Moreover, ∂_ϕ F(u_ϕ_0,ϕ_0) = - ⟨ v_ϕ_0,v_ϕ_0⟩_ϕ_0=-A ^2ℐ_ϕ_0(sin^2) with ℐ_ϕ(r):=∫_S r(y+ϕ)f '(u_ϕ(y))dy. By invariance on the circle ℐ_ϕ_0(sin^2)=ℐ(sin^2) defined in (<ref>) and Lemma <ref> implies then that ∂_ϕ F(u_ϕ_0,ϕ_0) = -A ^2=-A(κ)^2≠ 0 with Proposition <ref>. By the implicit function theorem, for any ϕ_0 there exists a neighborhood 𝒱(u_ϕ_0) of u_ϕ_0 such that the projection is well defined (i.e. for any g∈𝒱(u_ϕ_0), there exists a unique ϕ such that F(g,ϕ)=0 and g↦proj_𝒰(g) is smooth). By compactness of 𝒰, the existence of ϖ and the result of Proposition <ref> follow. The situation can be summarized by the following Figure <ref>. We reproduce the argument of <cit.> that establishes the existence and regularity of the isochron map in a more general context than here. Let g∈ B(𝒰,ε_0) and (ϵ_n)_n a sequence decreasing to 0. The first step is to prove that θ(g) satisfying (<ref>) exists. To do so, using the stability of 𝒰 proved in Corollary <ref>, one can find an increasing sequence of times (t_n) and a sequence of closed non-empty sets Φ_n⊂𝒰 such that for all n∈ℕ and θ∈Φ_n, ‖ψ_t_n (g) - u_θ‖_2≤ Cϵ_n for some constant C>0. 
It gives in particular that the diameter of Φ_n tends to zero as n→∞, hence the existence of an unique θ(g) such that ∩_n∈ℕΦ_n={ u_θ(g)} by Cantor's Intersection Theorem. The second step is to prove the regularity of θ:B(𝒰,ε_0)→ S. As 𝒰 is parameterized by S, we can define π(u) for u∈𝒰 as the unique ϕ∈ S such that u=u_ϕ. As the flow ψ is 𝒞^∞, the map g↦ =lim_t→∞ψ_t(g) is well defined and 𝒞^∞, and we have also lim_t→∞ψ_t(g)=u_θ(g). Then θ(g) can be written as π( lim_t→∞ψ_t (g)), hence g↦θ(g) is indeed 𝒞^∞. We focus now on the derivatives of g↦θ(g). Define Γ: g∈ B(𝒰,ε_0)↦Γ(g)=lim_t→∞Ψ_t g =u_θ(g)∈𝒰. From Proposition <ref>, Γ is smooth and is differentiable, and for g,h∈ L^2(S), DΓ(g)[h]=u_θ(g)'Dθ(g)[h]=v_θ(g)Dθ(g)[h]∈ L^2. Applied for g=u_ϕ and taking the scalar product with v_ϕ, one obtains ⟨ DΓ(u_ϕ)[h],v_ϕ⟩=Dθ(u_ϕ)[h] ‖ v_ϕ‖^2. Let us focus on DΨ_t g[h]. Let g_t be the solution of (<ref>) with g_0=g, that is g_t=Ψ_t(g), and h_t the solution of (<ref>) with h_0=g+h, that is h_t=Ψ_t(g+h). Then ∂_t(h_t-g_t) =-(h_t-u_t)+cos∗( f(h_t)-f(g_t)) =-(h_t-g_t)+cos∗( f'(g_t)(h_t-g_t))+ r_t with Taylor's formula and where r_t:=cos∗( (h_t-g_t)^2 ∫_0^1 (1-s)f”(g_t+s(h_t-g_t))ds)=o(‖ h ‖). We have then that DΨ_t(g)[h]=:w_t with ∂_t w_t=-w_t+cos∗( f'(Ψ_t g)(w_t)), w_0=h. In particular for the choice g=u_ϕ, DΨ_t(u_ϕ)[h]=e^tℒ_ϕh where ℒ_ϕ is defined in (<ref>). Moreover we can write with the operators defined in Definition <ref> e^tℒ_ϕh=e^tℒ_ϕ( P_ϕ^∘ h+ P_ϕ^⊥ h)= ⟨ h, v_ϕ⟩_ϕ‖ v_ϕ‖_ϕ v_ϕ + e^tℒ_ϕ P_ϕ^⊥ h. From (<ref>), ‖ e^tℒ_ϕ P_ϕ^⊥ h ‖_ϕ≤ e^tγ‖ P_ϕ^⊥ h‖_ϕ hence lim_t→∞e^tℒ_ϕh= ⟨ h, v_ϕ⟩_ϕ‖ v_ϕ‖_ϕ v_ϕ. As Γ(u_ϕ)=lim_t→∞Ψ_tu_ϕ=u_ϕ and lim_t→∞DΨ_t(u_ϕ)[h]=⟨ h, v_ϕ⟩_ϕ‖ v_ϕ‖_ϕ v_ϕ, we obtain that DΓ(u_ϕ)[h]= D(lim_t→∞Ψ_t u_ϕ)[h]=lim_t→∞ DΨ_t (u_ϕ)[h]=lim_t→∞ e^tℒ_ϕh=⟨ h, v_ϕ⟩_ϕ‖ v_ϕ‖_ϕ v_ϕ, which gives with (<ref>) the result (<ref>). We focus now on D^2θ. Recall Γ, for g,h,l∈ B(𝒰,ε_0), D^2Γ(g)[h,l]=-Dθ(g)[h]Dθ(g)[l]u_θ (g) + D^2θ(g)[h,l]v_θ(g). Applied for g=u_ϕ, it gives with (<ref>) D^2Γ(u_ϕ)[h,l]=-⟨ v_ϕ,h⟩_ϕ⟨ v_ϕ,l⟩_ϕ‖ v_ϕ‖_ϕ^2u_ϕ + D^2θ(u_ϕ)[h,l]v_ϕ. Taking the scalar product with v_ϕ, as ⟨ u_ϕ,v_ϕ⟩_ϕ=0 we obtain D^2θ(u_ϕ)[h,l] = ⟨ D^2Γ(u_ϕ)[h,l],v_ϕ⟩_ϕ‖ v_ϕ‖_ϕ^2. Let us focus on D^2Ψ_t g[h,l]. We have that DΨ_t(g)[h]=w_t, recall that it solves (<ref>). Let DΨ_t(g+l)[h]:=w̃_t, it solves ∂_t w̃_t=-w̃_t+cos∗( f'(Ψ_t (g+l))w̃_t), w̃_0=h. As done before, we obtain that ζ_t:=w̃_t-w_t solves with ζ_0=0 ∂_tζ_t = - ζ_t+ cos∗[ f'( Ψ_t(g+l)) (ζ_t+w_t)- f'(Ψ_t g)w_t] = - ζ_t+ cos∗[ f'( Ψ_t(g+l)) ζ_t] +cos∗[ ( f'( Ψ_t(g+l)) - f'(Ψ_t g))w_t]. From Taylor expansion in l, f'( Ψ_t(g+l)) =f'(Ψ_t(g)+DΨ_t(g)[l] + ∫_0^1 (1-s)D^2Ψ_t(g)[l]^2ds ) = f'(Ψ_t(g)) +f”(Ψ_t(g)) DΨ_t(g)[l] + o(‖ l ‖) hence cos∗[ f'( Ψ_t(g+l)) ζ_t] = cos∗( f'(Ψ_t(g)) ζ_t)+ O(‖ l ‖), and cos∗[ ( f'( Ψ_t(g+l)) - f'(Ψ_t g))w_t] =cos∗( f”(Ψ_tg)DΨ_t g[l] w_t) + o(‖ l ‖) = cos∗( f”(Ψ_tg)DΨ_t g[l]DΨ_t g[h]) +o(‖ l ‖). We obtain then after linearizing that D^2Ψ_t g[h,l]=ξ_t is solution of ∂_t ξ_t = -ξ_t + cos∗(f'(Ψ_t g) ξ_t ) +cos∗( f”(Ψ_tg)DΨ_t g[l]DΨ_t g[h]), ξ_0=0. In particular, for the choice g=u_ϕ, ∂_t ξ_t = ℒ_ϕξ_t + cos∗[ f”(u_ϕ) ( e^tℒ_ϕh) ( e^tℒ_ϕl) ], ξ_0=0, hence it solves the mild equation ξ_t= ∫_0^t e^(t-s)ℒ_ϕ( cos∗( f”(u_ϕ) ( e^sℒ_ϕh) ( e^sℒ_ϕl) ) ) ds. Recall (<ref>), hence we focus now on ⟨ξ_t,v_ϕ⟩_ϕ. From Proposition <ref>, ℒ_ϕ is self-adjoint hence ⟨ξ_t,v_ϕ⟩_ϕ = ∫_0^t ⟨cos∗( f”(u_ϕ) ( e^sℒ_ϕh) ( e^sℒ_ϕl) ), e^(t-s)ℒ_ϕ v_ϕ⟩_ϕ ds = ∫_0^t ⟨cos∗( f”(u_ϕ) ( e^sℒ_ϕh) ( e^sℒ_ϕl) ), v_ϕ⟩_ϕ ds as v_ϕ∈Kerℒ_ϕ. Recall (<ref>) and (<ref>). 
By the spectral decomposition of ℒ_ϕ along its eigenvalues 0, γ and -1, one has with Proposition <ref>, for s≥0, e^s ℒ_ϕh = α_ϕ^∘(h) v_ϕ + e^s γα_ϕ^γ(h) u_ϕ + e^-s(h- α_ϕ^∘(h) v_ϕ -α_ϕ^γ(h) u_ϕ) = e^-s h + α_ϕ^∘(h) (1- e^-s) v_ϕ + α_ϕ^γ(h) (e^ s γ - e^ -s) u_ϕ, so that one obtains (e^s ℒ_ϕh)(e^s ℒ_ϕl) = α_ϕ^∘(h) α_ϕ^∘(l) (1- e^ -s)^ 2 v_ϕ^ 2 + α_ϕ^γ(h) α_ϕ^γ(l) (e^ s γ - e^ -s)^ 2 u_ϕ^ 2 +e^ -s(1-e^ -s) {α_ϕ^∘(h) l+ α_ϕ^∘(l) h} v_ϕ + e^ -s(e^ s γ - e^ -s) {α_ϕ^γ(h) l + α_ϕ^γ(l) h} u_ϕ + (1- e^ -s) (e^ s γ- e^ - s) {α_ϕ^∘(h) α_ϕ^γ(l)+ α_ϕ^∘(l) α_ϕ^γ(h)} u_ϕ v_ϕ + e^ -2 s hl. We compute now ⟨ξ_ t , v_ϕ⟩_ϕ based on the previous decomposition. Fix some generic test functions h and l. Then ⟨cos∗(f^''(u_ϕ) hl) , v_ϕ⟩_ϕ = ∫_ S v_ϕ(x) f^'(u_ϕ(x)) ∫_ Scos(x-y) f^''(u_ϕ)(y) h(y)l(y) dy  dx. Expanding the cosine within the convolution and noticing that ∫_ S v_ϕ(x) f^'(u_ϕ(x)) cos(x+ ϕ) dx=0, we have with Lemma <ref> ⟨cos∗(f^''(u_ϕ) hl) , v_ϕ⟩_ϕ = (∫_ S v_ϕ(x) f^'(u_ϕ(x)) sin(x+ ϕ) dx ) ∫_ Ssin(y+ ϕ) f^''( u_ϕ(y))h(y)l(y) dy, = -A ℐ(sin^2) ∫_ Ssin(y+ ϕ) f^''( u_ϕ(y))h(y)l(y) dy= ∫_ S f^''( u_ϕ(y)) v_ϕ(y)h(y)l(y) dy. If now we take h=l= v_ϕ or h=l= u_ϕ, we see that the two terms of (<ref>) give a zero contribution to ⟨ξ_ t , v_ϕ⟩_φ as the function within the last integral is odd. Taking now h=v_ϕ (resp. h=u_ϕ) for given l, we see that the generic term within (<ref>) (resp. (<ref>)) gives rise to ⟨cos∗(f^''(u_ϕ) l v_ϕ) , v_ϕ⟩_ϕ = ∫_ S f^''(u_ϕ) v_ϕ(y)^ 2l(y) dy, ⟨cos∗(f^''(u_ϕ) l u_ϕ) , v_ϕ⟩_ϕ = ∫_ S f^''(u_ϕ) v_ϕ(y) u_ϕ(y)l(y) dy. Applying finally the last expression for l=v_ϕ gives for (<ref>), by integration by parts ⟨cos∗(f^''(u_ϕ) u_ϕv_ϕ) , v_ϕ⟩_ϕ = ∫_ S f^''(u_ϕ) v_ϕ(y)^ 2 u_ϕ(y) dy= - ∫_ S d/ dy{ u_ϕ(y) v_ϕ(y)} f^'(u_ϕ(y)) dy, =- ∫_ S v_ϕ(y)^ 2 f^'(u_ϕ(y)) dy+ ∫_ S u_ϕ(y)^ 2 f^'(u_ϕ(y)) dy= A^ 2γ, where we used (<ref>). Recall the definition of β_ϕ in (<ref>), putting all these estimates together we obtain ⟨ξ_t,v_ϕ⟩_ϕ =∫_0^t [ e^-s(1-e^-s)( α_ϕ^∘(h)β_ϕ(v_ϕ,l) +α_ϕ^∘(l)β_ϕ(v_ϕ,h)) . .+ e^-s(e^sγ-e^-s) ( α_ϕ^γ(h)β_ϕ(u_ϕ,l) +α_ϕ^γ(l)β_ϕ(u_ϕ,h)) . . + (1-e^-s)(e^sγ-e^-s) A^2γ( α_ϕ^∘(h)α_ϕ^∘(l)+α_ϕ^γ(h)α_ϕ^γ(l))+ e^-2 sβ_ϕ(h,l) ] ds, so that lim_ t→∞⟨ξ_t , v_ϕ⟩_ϕ= 1/ 2( α_ϕ^∘(h)β_ϕ(v_ϕ,l) +α_ϕ^∘(l)β_ϕ(v_ϕ,h)) + 1+ γ/ 2(1- γ)( α_ϕ^γ(h)β_ϕ(u_ϕ,l) +α_ϕ^γ(l)β_ϕ(u_ϕ,h)) - A^ 2 (2- γ)(1+ γ)/ 2(1-γ)( α_ϕ^∘(h)α_ϕ^∘(l)+α_ϕ^γ(h)α_ϕ^γ(l)) + 1/ 2β_ϕ(h,l). As D^2θ(u_ϕ)[h,l]=1A^2lim_t→∞⟨ξ_t,v_ϕ⟩_ϕ, we obtain (<ref>). § LONG TIME BEHAVIOR (PROOFS) The aim of this section is to prove Theorem <ref>. §.§ Main structure of the proof of Theorem <ref> First, fix some constant η such that 0<η<1/4. We also look for some T>0 that verifies C_PC_ℒ e^Tγ≤ 1/4, where C_P, C_ℒ and γ are introduced in Proposition <ref>. We first define the initial time T_0(N) thanks to the following Proposition, whose proof is postponed to Section <ref>. In the framework of Theorem <ref>, there exists a deterministic phase θ_0∈ S, an event B_N such that 𝐏(B_N)1 and a constant C>0 such that for all ε >0, for N sufficiently large, on the event B_N, the projection ψ=ψ_0^N=proj(U_N(Clog N)) is well defined and ‖ U_N(Clog N) - u_ψ_0^N‖_2≤N^2η√(N), |ψ_0^N - θ_0 |≤ε. We define T_0(N) thanks to Proposition <ref> by T_0(N)=Clog(N). Define the time discretisation of the interval [T_0(N),N^ατ_f] into subintervalls of length T, [T_n, T_n+1]: define n_f=inf{n∈ℕ,   N^ατ_f ≤ T_0(N)+n T} and for n=0,⋯, n_f-1, T_n=T_0(N)+nT. Let T_f(N):=T_n_f, by construction, T_f(N)≥ N^ατ_f. 
We prove in fact a more precise result that Theorem <ref> as stated in Remark <ref>: we show that there exists some C>0 such that we have 𝐏(sup_t∈ [T_0(N),T_f(N)]dist_L^2( U_N(t),𝒰) ≤ CN^η-1/2) 1. We focus on a process (V_n(t))_n∈ 1,n_f, t∈ [0,T] that iteratively compares U_N and its projection on 𝒰 at each step. We ensure it is correctly defined in the next part, then we give the main proof before the proof of some technical results we also need. Discretization In order to define the projection of U_N(T_n) into 𝒰, following Proposition <ref>, we need to ensure that dist_L^2(U_N(T_n),𝒰)≤ϖ. In order to do so, we introduce the stopping couple (n_τ,τ):=inf{ (n,t)∈ 1,n_f× [0,T]: dist_L^2(U_N(T_n-1+t),𝒰) > ϖ}, where the infimum corresponds to the lexicographic order. We introduce then τ_n:= {[ T if n<n_τ; τ if n≥ n_τ. ]. The process we consider is then (U_N(T_n∧ n_τ-1+t∧τ_n))_n∈ 1,n_f, t∈ [0,T]. The projection of this stopped process is well defined on the whole interval [T_0(N),T_f(N)] by construction, so that we can now define rigorously the random phases ϕ_n-1 for n=1,⋯,n_f by ϕ_n-1:=proj(U_N(T_n∧ n_τ-1). The object of interest is then the process V_n(t) of L^2(S) defined for n=1,⋯,n_f and t∈ [0,T] by V_n(t):=U_N(T_n∧ n_τ-1+t∧τ_n)-u_ϕ_n-1, as (<ref>) translates then into There exists an event Ω_N with 𝐏(Ω_N)1 such that on Ω_N, sup_1≤ n ≤ n_fsup_t∈[0,T]‖ V_n(t)‖_2 =O(N^2η√(N)), where the error is uniform on Ω_N. Here are the steps of the proof of Proposition <ref>. Step 1 - We show that the process (V_n(t))_n∈ 1,n_f, t∈ [0,T] satisfies the mild equation V_n(t)=e^(t∧τ_n)ℒ_ϕ_n-1V_n(0) + ∫_0^t∧τ_n e^(t∧τ_n-s)ℒ_ϕ_n-1R_n(s)ds+ ζ_n(t∧τ_n) where ζ_n(t):= ∫_0^t e^(t-s)ℒ_ϕ_n-1dM_N(s), and R_n(t)= cos∗(y↦ V_n(t)(y)^2 ∫_0^1 f ”( u_ϕ_n-1(y)+rV_n(t)(y))(1-r)dr ) + (∑_i,j=1^N 2πcos(x_i-x_j)N f (U_N,j(t-))1_B_N,i- cos∗ f (U_N(t))), where the notation ∗ stands for the convolution f*g(x)=∫_-π^π f(x-y)g(y)dy. The rigorous meaning of (<ref>) is given in Proposition <ref>, postponed to Section <ref>. Step 2 - We show a control of several terms of (<ref>) with the following Proposition, whose proof is postponed to Section <ref>. Define the event A_N:={sup_1≤ n ≤ n_fsup_t∈[0,T]‖ζ_n(t) ‖_2≤N^η√(N)}. In the framework of Theorem <ref>, 𝐏(A_N)1. Now let Ω_N:=A_N∪ B_N (recall B_N from Proposition <ref>) , we have 𝐏(Ω_N)1 with Propositions <ref> and <ref>. For the rest of the proof, we place ourselves now on this event Ω_N. Step 3 - Based on Steps 1 and 2 above, it remains to prove (<ref>). We proceed by induction. We know (as Ω_N⊂ B_N) that ‖ V_1(0)‖≤ N^2η-1/2. Suppose that ‖ V_n(0)‖_2≤ N^2η-1/2 for some n≥ 1. From the mild formulation satisfied by (V_n(t)) seen in (<ref>) we get ‖ V_n(t) ‖_2= ‖ e^(t∧τ_n)ℒ_ϕ_n-1V_n(0)‖_2 + ‖∫_0^t∧τ_n e^(t∧τ_n-s)ℒ_ϕ_n-1R_n(s)ds‖_2+ ‖ζ_n(t∧τ_n)‖_2. Recall (<ref>) and Proposition <ref>, by definition of the phase projection, P_ϕ_n-1,0(U_N(T_n∧ n_τ-1)-u_ϕ_n-1)=0 hence V_n(0)=U_N(T_n∧ n_τ-1)-u_ϕ_n-1= P_ϕ_n-1,s V_n(0). Proposition <ref> and more especially (<ref>) give then, with the induction hypothesis ‖ e^(t∧τ_n)ℒ_ϕ_n-1 V_n(0)‖_ϕ_n-1≤ e^(t∧τ_n)γ‖ V_n(0)‖_ϕ_n-1≤ C_0 e^(t∧τ_n)γN^2η-1/2 where C_0 is introduced in (<ref>). From Proposition <ref>, we have ‖∫_0^t∧τ_n e^(t∧τ_n-s)ℒ_ϕ_n-1R_n(s)ds‖_2≤ TC_ℒsup_0≤ s≤ T‖ R_n(s)‖_2. By definition of A_N, sup_1≤ n ≤ n_fsup_t∈[0,T]‖ζ_n(t) ‖_2 ≤ N^η-1/2 as we are on Ω_N. We obtain then, for any t∈ [0,T] ‖ V_n(t) ‖_2≤ C_0e^(t∧τ_n)γN^2η-1/2 + TC_ℒsup_0≤ s ≤ T‖ R_n(s)‖_2+ N^η-1/2. 
For any t∈ [0,T], recalling (<ref>), sup_0≤ s ≤ t‖ R_n(s)‖_2 ≤sup_0≤ s ≤ t‖cos∗(y↦ V_n(s)(y)^2 ∫_0^1 f ”( u_ϕ_n-1(y)+rV_n(s)(y))(1-r)dr )‖_2 +sup_0≤ s ≤ t‖∑_i,j=1^N 2πcos(x_i-x_j)N f (U_N,j(s-))1_B_N,i- cos∗ f (U_N(s))‖_2 = (A) + (B). Using Young's inequality ‖ u ∗ v ‖_2 ≤‖ u ‖_1 ‖ v ‖_2 and the boundedness of f”, we have (A) ≤sup_0≤ s ≤ t( ‖cos‖_2 ∫_S | V_n(s)(y)^2 ∫_0^1 f ”( u_ϕ_n-1(y)+rV_n(s)(y))(1-r)dr | dy) ≤ Csup_0≤ s ≤ t‖ V_n(s) ‖_2^2 for some positive C. For the second term (B) of (<ref>), we introduce Υ_1,i,s = 2πN∑_j=1^N cos(x_i-x_j) ( f(U_N,j(s-))-f(U_N,j(s)) ) Υ_2,i,s = 2πN∑_j=1^N cos(x_i-x_j)f(U_N,j(s))-∫_S cos(x_i-y)f(U_N(s)(y))dy Υ_3,i,s(x) =∫_S( cos(x_i-y)-cos(x-y))f(U_N(s)(y))dy, x∈ S. From the Lipschitz continuity of f and the fact that Z_N,1,⋯,Z_N,N do not jump simultaneously, |Υ_1,i,s|≤CN hence ‖∑_i=1^NΥ_1,i,s1_B_N,i‖_2^2=O(1N^2). As 1_B_N,i1_B_N,j≡ 0 for i≠ j, for any 0≤ s ≤ t we have ‖∑_i=1^N Υ_2,i,s1_B_N,i‖_2^2= 2πN∑_i=1^N (∑_j=1^N ∫_B_N,j( cos(x_i-x_j)-cos(x_i-y)) f(U_N(s)(y))dy )^2. As f is bounded (by 1) and cos is 1-Lipschitz continuous, we obtain ‖∑_i=1^N Υ_2,i,s1_B_N,i‖_2^2≤2πN∑_i=1^N (∑_j=1^N ∫_B_N,j| x_j-y| dy )^2≤8π^5N^2. Similarly, ‖∑_i=1^N Υ_3,i,s1_B_N,i‖_2^2 = ∫_S ∑_i=1^N Υ_3,i,s(x)^21_B_N,i(x)dx = ∑_i=1^N ∫_B_N,i(∫_S( cos(x_i-y)-cos(x-y))f(U_N(s)(y))dy )^2dx ≤∑_i=1^N ∫_B_N,i(∫_S | x_i-x| dy )^2dx ≤8π^5N^2. Hence we have for some positive C_R,1 sup_0≤ s ≤ t‖ R_n(s)‖_2≤ C_R,1( sup_0≤ s ≤ t‖ V_n(s) ‖_2^2 + 1N). Define then t^* as t^*:= inf{ t∈ [0,T]: ‖ V_n(t) ‖_2≥ 2C_0N^2η√(N)}. Note that with no loss of generality, one can assume that C_0>1. Since by assumption ‖ V_n(0)‖_2≤N^2η√(N)<C_0 N^2η√(N), we have ‖ V_n(t)‖_2≤ 2C_0 N^2η√(N) at least for t<t_1 where t_1 is the first jump among (Z_N,1,⋯,Z_N,N). Hence t^*>0. If t≤ t^*, sup_0≤ s ≤ t‖ R_n(s)‖_2≤ C_R,2 N^4η-1 (as η>0, N^-1≪ N^4η-1). Coming back to (<ref>), we obtain that (for some positive constant C_R) ‖ V_n(t) ‖_2≤ C_0e^(t∧τ_n)γN^2η-1/2 + TC_R N^4η-1+ N^η-1/2. Since 0<η<14, N^4η-1≪ N^2η-1/2 hence for N large enough TC_R N^4η-1+ N^η-1/2≤ C_0 N^2η-1/2 thus as γ<0, t^*=T. By construction of the stopping time τ_n in (<ref>), we have then that τ_n=T, hence sup_0≤ t≤ T‖ V_n(t)‖_2≤ 2C_0N^2η-1/2 . To conclude the induction, we need to show that ‖ V_n+1(0)‖_2≤ N^2η-1/2. By definition (<ref>) and as τ_n=T, V_n+1(0)= U_N(T_n)-u_ϕ_n and V_n(T)=U_N(T_n)-u_ϕ_n-1 hence V_n+1(0)=V_n(T)+u_ϕ_n-1-u_ϕ_n. Moreover, as V_n+1(0)=P_ϕ_n^⊥ V_n+1(0) since by definition V_n+1(0)∈Ker( ℒ_ϕ_n)^⊥ (recall Proposition <ref>), we obtain V_n+1(0) = P_ϕ_n^⊥(V_n(T)+u_ϕ_n-1-u_ϕ_n) = (P_ϕ_n^⊥ -P_ϕ_n-1^⊥)V_n(T)+P_ϕ_n-1^⊥ V_n(T) +P_ϕ_n^⊥( u_ϕ_n-1-u_ϕ_n). We are going to control each term of (<ref>). First, using the smoothness of the phase projection from Proposition <ref>, |ϕ_n-1-ϕ_n | = |proj( U_N(T_(n-1)∧ n_τ -1)) - proj( U_N(T_n-∧ n_τ -1))| ≤ C_proj‖ U_N(T_(n-1)∧ n_τ -1) - U_N(T_n-∧ n_τ -1)‖_2 ≤ C_proj‖ V_n-1(0) - V_n-1(T)‖_2≤ C N^2η-1/2, using (<ref>). Recall (<ref>) and (<ref>), we have for any x∈ S u_ϕ_n-1(x)-u_ϕ_n(x) = A cos(x+ϕ_n-1) -A cos(x+ϕ_n) = -2A sin( ϕ_n-1-ϕ_n) sin( x+ϕ_n + ϕ_n-1-ϕ_n2) = 2 sin( ϕ_n-1-ϕ_n) (cos( ϕ_n-1-ϕ_n2) v_ϕ_n(x)-sin( ϕ_n-1-ϕ_n2) u_ϕ_n(x) ) thus, as P_ϕ_n^⊥ v_ϕ_n=0, P_ϕ_n^⊥(u_ϕ_n-1-u_ϕ_n) = -2 sin( ϕ_n-1-ϕ_n))sin( ϕ_n-1-ϕ_n2) P_ϕ_n^⊥ u_ϕ_n. As u_ϕ_n is bounded and sin is Lipschitz continuous, we obtain with (<ref>) a control of the third term of (<ref>) ‖ P_ϕ_n^⊥( u_ϕ_n-1-u_ϕ_n)‖_2≤ C (ϕ_n-1-ϕ_n)^2 = O(N^4η-1). 
Similarly, recall (<ref>), ϕ↦ P_ϕ^⊥ is smooth, hence for some C>0 ‖(P_ϕ_n^⊥ -P_ϕ_n-1^⊥)V_n(T)‖_2≤ C |ϕ_n-1-ϕ_n |‖ V_n(T)‖= O(N^4η-1). Combining (<ref>) and (<ref>) in (<ref>), using (<ref>) at time t=T and recalling Proposition <ref>, we obtain for N large enough ‖ V_n+1(0)‖_2≤‖ P_ϕ_n-1^⊥ V_n(T) ‖_2 + O(N^4η-1)≤ 2C_PC_0 e^Tγ N^2η-1/2+ O(N^4η-1). From the choice of T satisfying (<ref>), the fact that ‖ V_n+1(0)‖_2≤ N^2η-1/2 follows and the recursion is concluded, so that Theorem <ref> follows. §.§ About the mild formulation Step 1 of Section <ref> is a direct consequence of the following proposition. Fix ϕ∈ S and 0<t_a<t_b. Recall the definition of U_N in (<ref>), and define, for any t∈ [t_a,t_b], U_N,ϕ(t)=U_N(t)-u_ϕ. The process (U_N,ϕ(t))_t∈[t_a,t_b] satisfies the following semimartingale decomposition in D([t_a,t_b],L^2(S)), written in a mild form: for any t_a≤ t≤ t_b U_N,ϕ(t)=e^(t-t_a)ℒ_ϕU_N,ϕ(t_a) + ∫_t_a^t e^(t-s)ℒ_ϕr_N,ϕ(s)ds+∫_t_a^t e^(t-s)ℒ_ϕ dM_N(s), with M_N(t)= ∑_i=1^N ∑_j=1^N 2πcos(x_i-x_i)N( Z_N,j(t) - ∫_0^tλ_N,j(s)ds) 1_B_N,i and r_N,ϕ(t)= cos∗(y↦U_N,ϕ(t)(y)^2 ∫_0^1 f ”( u_ϕ(y)+rU_N,ϕ(t)(y))(1-r)dr ) + (∑_i,j=1^N 2πcos(x_i-x_j)N f (U_N,j(t-))1_B_N,i- cos∗ f (U_N(t))). From (<ref>), we obtain that U_N verifies dU_N(t)=-U_N(t)dt+∑_i,j=1^N 2πcos(x_i-x_j)NdZ_N,j(t)1_B_N,i. The centered noise M_N defined in (<ref>) verifies dM_N(t):= ∑_i=1^N ∑_j=1^N 2πcos(x_i-x_j)N( dZ_N,j(t) - f (U_N,j(t-))dt) 1_B_N,i, and is a martingale in L^2(S). Thus recalling that u_ϕ solves (<ref>) and by inserting the terms ∑_i=1^N ∑_j=1^N 2πcos(x_i-x_j)Nf (U_N,j(t-))dt1_ B_N,i and u_ϕ, we obtain dU_N,ϕ(t)=-U_N,ϕ(t)dt + dM_N(t) + (∑_i,j=1^N 2πcos(x_i-x_j)N f (U_N,j(t-))1_B_N,i-∫_-π^πcos(·-y)f (u_ϕ(y))dy)dt. A Taylor's expansion gives that for any y∈ S, f (U_N(t)(y))-f (u_ϕ(y))=f '(u_ϕ(y))U_N,ϕ(t)(y)+∫_0^1 f ”( u_ϕ(y)+rU_N,ϕ(t)(y))(1-r)dr U_N,ϕ(t)(y)^2, hence identifying the operator ℒ_ϕ defined in (<ref>) we have dU_N,ϕ(t)= ℒ_ϕU_N,ϕ(t)dt + dM_N(t) + ∫_-π^πcos(· - y)∫_0^1 f ”( u_ϕ(y)+rU_N,ϕ(t)(y))(1-r)dr U_N(t)(y)^2dydt + (∑_i,j=1^N 2πcos(x_i-x_j)N f (U_N,j(t-))1_B_N,i-∫_-π^πcos(·-y) f (U_N(t)(y)))dt, and recognizing r_N,ϕ defined in (<ref>) we have dU_N,ϕ(t) = ℒ_ϕU_N,ϕ(t)dt + r_N,ϕ(t)dt + dM_N(t). Then the mild formulation (<ref>) is a direct consequence of Lemma 3.2 of <cit.>: the unique strong solution to (<ref>) is indeed given by (<ref>). §.§ About the initialisation We prove here Proposition <ref>, that we use to define the initial time T_0(N) and in the second part of Step 2 of Section <ref>. To prove Proposition <ref>, we proceed in several steps, as done in <cit.>[Proposition 2.9]. Step a. We rely on the convergence in finite time of U_N to its large population limit, that is u_t solving (<ref>) with initial condition ρ. From the deterministic behavior of u_t and the stability of 𝒰, U_N approaches 𝒰 in a 2ε_0-neighborhood; and this takes a time interval of order |logε_0|. Step b. We rely on the stability of 𝒰 and the control ofn the noise to show that, from a 2ε_0-neighborhood, U_N approaches 𝒰 in a N^2η-1/2-neighborhood; and this takes a time interval of order log N. Step c. We ensure that U_N stays at distance N^2η-1/2 from 𝒰 at time T_0(N). Step a. We focus first on ψ_t(ρ), solution to (<ref>) with initial condition ρ∈ B(𝒰,ε_0). Thanks to Corollary <ref>, we have that it converges as t→∞ towards some u_θ_0∈𝒰. Thus, there exists a time s_1≥ 0 such that ‖ u_s_1 - u_θ_0‖_2≤ε_0, and this time is of order 1γlogε_0. We focus then on the random profile U_N. 
We use a mild formulation similar to the one used in Proposition <ref>: one can obtain, with u_t solving (<ref>) d( U_N(t)-u_t)=-( U_N(t)-u_t)dt + dM_N(t) + ( ∑_i,j=1^N 2πcos(x_i-x_j)Nf( U_N,j(t-))1_B_N,i-∫_-π^πcos(·-y)f(u_t(y))dy)dt, where M_N is defined in (<ref>). We have then for any t≥ 0 U_N(t)-u_t=e^-t(U_N(0)-ρ) + ∫_0^t e^-(t-s)dM_N(s) + ∫_0^t e^-(t-s)r_N(s)ds with r_N(s):=∑_i 1_B_N,i∑_j2πcos(x_i-x_j)N( f( U_N,j(s-)) - f( U_N,j(s)) ) + ∑_i 1_B_N,i( ∑_j2πcos(x_i-x_j)Nf( U_N,j(s)) - ∫_-π^πcos(x_i-y)f(U_N(s)(y))dy) + ∑_i 1_B_N,i∫_-π^π(cos(x_i-y)-cos(·-y))f(U_N(s)(y))dy + ∫_-π^πcos(·-y)( f(U_N(s)(y)) - f(u_s(y)))dy = ∑_i=1^N 1_B_N,i( Υ_1,i,s + Υ_2,i,s+ Υ_3,i,s) + Υ_4,s. As done for Υ_1,i,s, Υ_2,i,s and Υ_3,i,s (<ref>) in Proposition <ref>, we have for some C>0 ‖∑_i=1^N 1_B_N,i( Υ_1,i,s + Υ_2,i,s+ Υ_3,i,s)‖_2^2 ≤CN^2. Moreover an immediate computation gives, as f is Lipschitz continuous ‖Υ_4,s‖_2≤ C ‖ U_N(s)-u_s‖_2. Then we have for any t∈ [0,s_1] with ζ_N(s):=∫_0^s e^-(s-u)dM_N(u), ‖ U_N(t)-u_t ‖_2≤‖ U_N(0)-ρ‖_2 + ‖ζ_N(t)‖_2 + CN+∫_0^t e^-(t-s)‖ U_N(s)-u_s ‖_2 ds. Take N sufficiently large so that ‖ U_N(0)-ρ‖_2≤ε_02. We place ourselves on the event C_N:={sup_t∈ [0,s_1]‖ζ_N(t)‖_2≤ N^η-1/2}. As done in Proposition <ref>, 𝐏(C_N)1. Going back to (<ref>), we have on C_N ‖ U_N(t)-u_t ‖_2≤ε_02+ N^η-1/2+ CN+∫_0^t e^-(t-s)‖ U_N(s)-u_s ‖_2 ds. We deduce with Grönwall lemma that for N large enough, ‖ U_N(s_1)-u_s_1‖_2≤ε_0 on C_N, which means that ‖ U_N(s_1)-u_θ_0‖_2≤ 2ε_0 hence dist(U_N(s_1),𝒰)≤ 2ε_0. Choosing ε_0 small enough so that 2ε_0<ϖ (recall Proposition <ref>), we can define ψ_0^1=proj(U_N(s_1)) and |ψ_0^1 - θ_0 |≤ Cε_0. Step b. Since we know that dist_L^2( U_N(s_1),𝒰)≤ 2ε_0 with increasing probability as N→∞, we show that U_N approaches 𝒰 up to a distance N^2η-1/2 doing a similar iteration as in Proposition <ref>. Define the sequence (h_n) such that h_1=2ε_0 and h_n+1=h_n/2, and let n_f:=inf{n≥ 1, h_n≤ N^2η-1/2}. Note that such n_f is of order O(log N). Fix T satisfying C_PC_0 e^Tγ≤ 1/4, and define then for any n∈ 1,n_f the times T_n=s_1+(n-1) T. As in (<ref>) and (<ref>), define (n_τ,τ):=inf{ (n,t)∈ 1,n_f× [0,T]: dist_L^2(U_N(T_n-1+t),𝒰) > ϖ}, and τ_n:= {[ T if n<n_τ; τ if n≥n_τ. ]. The process we consider is then (U_N(T_n∧ n_τ-1+t∧τ_n))_n∈ 1,n_f, t∈ [0,T], which is exactly (U_N(t))_t∈ [s_1,T_n_f] unless the process has been stopped. The projection of this stopped process is well defined on the whole interval, so that we can now define rigorously the random phases ϕ_n-1 for n=1,⋯,n_f by ϕ_n-1:=proj(U_N(T_n∧n_τ-1). The object of interest is then the process V_n(t) of L^2(S) defined for n=1,⋯,n_f and t∈ [0,T] by V_n(t):=U_N(T_n∧n_τ-1+t∧τ_n)-u_ϕ_n-1. It satisfies the mild equation V_n(t)=e^(t∧τ_n)ℒ_ϕ_n-1V_n(0) + ∫_0^t∧τ_n e^(t∧τ_n-s)ℒ_ϕ_n-1R_n(s)ds+ ζ_n(t∧τ_n) where ζ_n(t):= ∫_0^t e^(t-s)ℒ_ϕ_n-1dM_N(s), and R_n(t)= cos∗(y↦V_n(t)(y)^2 ∫_0^1 f ”( u_ϕ_n-1(y)+rV_n(t)(y))(1-r)dr ) + (∑_i,j=1^N 2πcos(x_i-x_j)N f (U_N,j(t-))1_B_N,i- cos∗ f (U_N(t))). Define the event B_N:=C_N⋂{sup_n∈ 1,n_fsup_t∈ [0,T]‖ζ_n(t)‖_2≤ N^η-1/2}. As done in Proposition <ref>, 𝐏(B_N)→ 1 and from now on we work under B_N. We want to show by induction that on B_N, for all n∈ 1 , ñ_f, V_n(0)≤ h_n. The first step of the proof ensures that on C_N, V_1(0)≤ h_1. Assume for some n<n_f, V_n(0)≤ h_n. From the mild formulation (<ref>) we obtain (as done in (<ref>)) ‖V_n(t) ‖_2≤ C_0e^(t∧τ_n)γh_n + TC_ℒsup_0≤ s≤T‖R_n(s)‖_2+ N^η-1/2. Define then t^* as t*:= inf{ t∈ [0,T]: ‖V_n(t) ‖_2≥ 2C_0h_n}. 
We have t^*>0, and if t≤t^*, sup_0≤ s ≤ t‖R_n(s)‖_2≤ C_R_2(h_n^2+N^-1), as done in (<ref>). Coming back to (<ref>), we obtain that (for some positive constant C_R) ‖V_n(t) ‖_2≤ C_0e^(t∧τ_n)γh_n + TC_R(h_n^2 + N^-1)+ N^η-1/2. Since n<n_f, 2ε_0≥ h_n>N^2η-1/2 hence for N large enough, N^η-1/2, N^-1 are negligible with respect to h_n, same for h_n^2 thus t^*≥T. To conclude the induction, we need to show that ‖V_n+1(0)‖≤ h_n+1=h_n/2. As shown in (<ref>), V_n+1(0) = (P_ϕ_n^⊥ -P_ϕ_n-1^⊥)V_n(T)+P_ϕ_n-1^⊥V_n(T) +P_ϕ_n^⊥( u_ϕ_n-1-u_ϕ_n). From the similar controls (<ref>) and (<ref>) and using (<ref>) for t=T, we have for N large enough, ‖V_n+1(0)‖_2≤‖ P_ϕ_n-1^⊥V_n(T) ‖_2 + O(h_n^2)≤ 2 C_PC_0 e^Tγh_n+ O(h_n^2). Recall (<ref>) and γ<0, the fact that ‖V_n+1(0)‖_2≤ h_n+1 follows then and the iteration is concluded. Thus, we have constructed a time s_2=s_1+(n_f-1)T such that, on B_N for N large enough, setting ψ_0^2:=proj(U_N(s_2)), we have ‖ U_N(s_2)-u_ψ_0^2‖_2 ≤ N^2η-1/2 and |ψ_0^2-ψ_0^1|≤ Cε_0, which gives |ψ_0^2-θ_0|≤ C'ε_0 sor some C'>0. Step c. So far, we have constructed a time s_2=C(|logε_0| + log N) for which we have dist_L^2(U_N(s_2),𝒰)≤ N^2η-1/2. We want some s_3=C̃log N ≥ s_2, C̃=C+1, independent of ε_0 such that with ψ_0^3:=proj(U_N(s_3)), ‖ U_N(s_3) - u_ψ_0^3‖≤ N^2η-1/2. For this, it suffices to decompose the dynamics on [s_2,s_3] in a same way as before in both Steps 1 and 2. This induces a drift |ψ_0^3-ψ_0^2|≤ C N^2η-1/2log(N)≤ε_0 for N large enough. This last step concludes the proof with T_0(N)=s_3. § FLUCTUATIONS ON THE MANIFOLD (PROOFS) The aim of this section is to prove Theorem <ref>. We start by giving an auxiliary lemma. There exists some C>0 such that for any g ∈ B(𝒰,ε_0), dist_L^2(g,𝒰) ≤‖ g -u_θ(g)‖_2≤ C dist_L^2(g,𝒰). Let g∈ B(𝒰,ε_0). The first inequality directly comes from the definition of dist_L^2(g,𝒰). By compactness of 𝒰, there exists some y∈𝒰 such that dist_L^2(g,𝒰) = ‖ g -y‖_2 (and y=u_θ(y)). Then ‖ g - u_θ(g)‖_2≤‖ g-y ‖_2 + ‖ u_θ(y)-u_θ(g)‖_2, and as ϕ↦ u_ϕ and θ are Lipschitz continuous (recall u_ϕ=Acos(·+ϕ) and θ is 𝒞^2 from Proposition <ref>), ‖ u_θ(y)-u_θ(g)‖_2≤Ĉ‖ g-y‖_2 for some Ĉ>0 (independent of the choice of g). §.§ Main structure of the proof of Theorem <ref> First, <ref> and Lemma <ref> give that one can find an event Ω_N such that 𝐏(Ω_N)1 and on this event sup_t∈ [T_0(N),T_f(N)]‖ U_N(t) - u_θ(U_N(t))‖_2 = O( N^η-1/2), with T_0(N)=Clog(N) and T_f(N)=Nτ_f. It remains to study the behavior of the isochron map of the process, that is θ(U_N(t)). We do a change of variables and introduce τ_0(N):= T_0(N)N, we define for any τ∈ [τ_0(N),τ_f] the rescaled process θ_N(τ)=θ( U_N(Nτ)). In the proof, we keep the notation t for the microscopic time variable, that is when t∈ [T_0(N),T_f(N)] and τ for the macroscopic time variable, when τ∈ [τ_0(N),τ_f]. Theorem <ref> relies on the following decomposition of θ_N, obtained by Itô's lemma. For any initial condition τ_0≥τ_0(N), for any τ≥τ_0, θ_N(τ) can be written as θ_N(τ)=θ_N(τ_0)+ϑ_N(τ_0,τ)+Θ_N(τ_0,τ), where sup_τ_0(N)≤τ_0≤τ≤τ_f𝐄( |ϑ_N(τ_0,τ)|)0 and Θ_N(τ_0,τ) is a real martingale with quadratic variation [ Θ_N]_τ = 1N∑_j=1^N ∫_τ_0^τΦ(x_j,θ_N(s))f(u_θ_N(s)(x_j))ds with Φ(x,θ):=4π^2 sin^2(x+θ). The proof of Proposition <ref> is postponed to Section <ref>. The remaining of the proof of Theorem <ref> is to prove the tightness of ( θ_N(t)) and to identify its limit. We apply Aldous criterion: note first that for any τ∈ [ε,τ_f], θ_N(τ)∈ S a compact set. 
Let (τ_N)_N be a bounded sequence of θ_N-optional times, let (h_N) be a sequence of positive constants such that h_N→ 0. From Proposition <ref>, we have θ_N(τ_N+h_N)-θ_N(τ_N) = ϑ_N(τ_N,τ_N+h_N) +Θ_N(τ_N,τ_N+h_N), where ϑ_N(τ_N,τ_N+h_N)0 and Θ_N has the quadratic variation [ Θ_N ]_τ_N+h_N= 1N∑_j=1^N ∫_τ_N^τ_N+h_NΦ(x_j,θ_N(s))f(u_θ_N(s)(x_j))ds. Using Burkholder-Davis-Gundy inequality, as Φ and f are bounded, we have that 𝐄[Θ_N(τ_N,τ_N+h_N)^2]≤ C 𝐄[ [ Θ_N]_τ_N+h_N] ≤ Ch_N for some positive constants C. We obtain then that θ_N(τ_N+h_N)-θ_N(τ_N) 0 hence the convergence in probability: for all ε>0, 𝐏( |θ_N(τ_N+h_N)-θ_N(τ_N) |> ε)0. We can then use Aldous criterion (see Theorem 16.8 of <cit.>): (τ∈ [ε_N,τ_f] ↦θ_N(τ) )_N is tight. Let τ↦θ(τ) be a limit in distribution of any subsequence of (τ↦θ_N(τ))_N (by convenience renamed θ_N) , that is θ_N θ. By Skorokhod's representation theorem, we can represent this convergence on a common probability space such that θ_N θ. Using this in (<ref>), we obtain that for any τ∈ [0,τ_f], as N goes to infinity, the quadratic variation of θ is [θ]_τ= 2π∫_0^τ∫_Ssin^2(x+θ(s))f(Acos(x+θ(s))) dx  ds=σ^2 τ, with σ defined in (<ref>). We conclude by Lévy's characterization theorem and obtain (<ref>). §.§ About the decomposition of Proposition <ref> To show (<ref>), we study (θ(U_N(t))_t∈ [T_0(N),T_f(N)]. To simplify the notations, we introduce θ_N(t):=θ(U_N(t)). Note that from the decomposition (<ref>) of U_N(t) and the definition M_N(t) in (<ref>), one can write dU_N(t)=B_N(t)dt + dM_N(t) where B_N(t):=-U_N(t)+ cos∗ f( U_N(t))+ Υ_t, with Υ_t(x)=∑_i=1^N ( 2πN∑_j=1^N cos(x_i-x_j)f(U_N,j(t-)) - ∫_Scos(x-y)f(U_N(t)(y))dy)1_B_N,i(x). The starting point is to write the semimartingale decomposition of θ(U_N(t)) from Itô formula: θ(U_N(t))= θ(U_N(t_0))+ ∫_t_0^t Dθ(U_N(s-))[-U_N(s)+ cos∗ f( U_N(s-))]ds +∫_t_0^t Dθ(U_N(s-))Υ_sds+ ∫_t_0^t Dθ(U_N(s-))[dM_N(s)] + ∑_j=1^N∫_t_0^t∫_0^∞[ θ( U_N(s-)+χ_j(s,z)) - θ(U_N(s-)) - Dθ(U_N(s-))[χ_j(s,z)]]π_j(ds,dz) =: θ(U_N(t_0))+I_1^N(t_0,t)+ I_2^N(t_0,t) + I_3^N(t_0,t) + I_4^N(t_0,t). We are going to focus on each of the terms of (<ref>), that is I_k^N(t_0,t) for k∈{1,2,3,4}. We have the following lemmas. We have sup_t_0∈ [T_0(N),T_f(N)]sup_t∈ (t_0, T_f(N))| I_1^N(t_0,t) |0 in probability. We have sup_t_0∈ [T_0(N),T_f(N)]sup_t∈ (t_0, T_f(N))| I_2^N(t_0,t) |0 in probability. For any t_0,t∈ [T_0(N),T_f(N)], t_0≤ t, we have I_3^N(t_0,t)=Θ_N(t_0,t) + J_3^N(t_0,t) where sup_s∈ (t_0,T_f(N))𝐄( | J_3^N(t_0,s) |) 0 and Θ_N is a real martingale with quadratic variation [Θ_N]_t=1N^2∑_j=1^N∫_t_0^t Φ(x_j,θ(U_N(s-))) f(u_θ(U_N(s-))(x_j))ds with Φ defined in (<ref>). We have sup_t_0 ∈ [T_0(N),T_f(N)]sup_t∈ (t_0, T_f(N))𝐄( | I_4^N(t_0,t)|) 0. The proofs of these fours lemmas are postponed to Section <ref>. Combining them, we can define some random variable J_N(t_0,t) such that sup_s∈ (t_0,T_f(N))𝐄( | J^N(t_0,s) |) 0 and for any t_0,t∈ [T_0(N),T_f(N)], t_0≤ t, θ(U_N(t)) = θ(U_N(t_0)) + J^N(t_0,t) +Θ_N(t_0,t). Recall the change of variables used to define θ in (<ref>). Define similarly ϑ_N(τ_0,τ):=J^N(Nτ_0,Nτ) and Θ_N(τ_0,τ)=Θ_N(Nτ_0,Nτ) for τ_0=t_0/N and τ=t/N. Then we have exactly shown (<ref>). §.§ Control of the terms of the decomposition For simplicity, we may write I_k(t) instead of I_k^N(t_0,t). In the following, we use the notations g_s=O(α_N) with g:s∈ I ↦ g_s ∈ L^2(S) for some time interval I and a sequence (α_N) independent of the time s when there exists some C (independent of N) such that for all x∈ S, sup_s∈ Isup_x∈ S| g_s(x) |≤ C α_N. 
Recall the definition of θ_N(t) in (<ref>). In the following proofs, this notation will be essentially used for t=s-, so that we write for simplicity θ_N=θ_N(s-). §.§.§ Proof of Lemma <ref> Recall that I_1(t):=∫_t_0^t Dθ(U_N(s-))[-U_N(s)+ cos∗ f( U_N(s))]ds. Define for g∈ L^2(S) 𝒱(g):=-g+cos∗ f(g). Recall that for any ϕ∈ S, ℒ(u_ϕ)=0 and D𝒱(u_ϕ)[h]=ℒ_ϕ h. Let g∈ B(𝒰,ε_0), and t↦ g_t:=ψ_t(g) defined in (<ref>), that is the flow of (<ref>) under initial condition g. Note that by definition of the isochron map θ in Proposition <ref> and the fact that 𝒰 consists of stationary solutions to (<ref>), one has that θ(ψ_t(g))=θ(ψ_0(g))=θ(g). Differentiating with respect to t (recall Proposition <ref>) gives that Dθ (g_t)[∂_t g_t]=Dθ (g_t)[-g_t+cos∗ f(g_t)]=0. Since this is for all t≥ 0, taking t=0 gives Dθ(g)[-g+cos∗ f(g)]=0. Hence for any s, Dθ(U_N(s))[𝒱(U_N(s))]=0 and as 𝒱(u_θ(U_N(s)))=0, we have I_1(t) = ∫_t_0^t Dθ(U_N(s-))[𝒱(U_N(s))]ds =∫_t_0^t (Dθ(U_N(s-))-Dθ(U_N(s)))[𝒱(U_N(s))]ds = ∫_t_0^t (Dθ(U_N(s-))-Dθ(U_N(s)))[𝒱(U_N(s))-𝒱(u_θ(U_N(s))]ds. As θ and 𝒱 are Lipschitz continuous, as from (<ref>) a jump of the process gives a.s. at most an increment of 2π/N between U_N(s-) and U_N(s), using (<ref>) there exists some C>0 (independent of N and of the time) such that I_1(t)≤ (t-t_0) ‖θ‖_lip2πN‖𝒱‖_lip‖ U_N(s-) -u_θ(U_N(s))‖_2 ≤C T_f(N)N N^η-1/2 on the event Ω_N (given by Theorem <ref>). As T_f(N) ∝ N and from the choice on η, (<ref>) follows. §.§.§ Proof of Lemma <ref> We place ourselves again on the event Ω_N (given by Theorem <ref>) on which we have (<ref>). Recall that I_2(t):=∫_t_0^t Dθ(U_N(s-))Υ_sds, where the definition of Υ is given in (<ref>). We have I_2(t)= ∫_t_0^t( Dθ (U_N(s-))-Dθ(u_θ_N))[Υ_s]ds+ ∫_t_0^t Dθ(u_θ_N)[Υ_s- Υ̃_s]ds+∫_t_0^t Dθ(u_θ_N)[Υ̃_s]ds, with Υ̃_s(x)=∑_i=1^N ( 2πN∑_j=1^N cos(x_i-x_j)f(u_θ_N(x_j)) - ∫_Scos(x-y)f(u_θ_N(y))dy)1_B_N,i. From (<ref>) and (<ref>) we have that ‖Υ_s‖_2 ≤CN for some C>0 independent of N and s, thus, for the first term of (<ref>), as done before using (<ref>), ∫_t_0^t( Dθ (U_N(s-))-Dθ(u_θ(U_N(s-))))[Υ_s]ds ≤ (t-t_0) C ‖θ‖_lip N^η-1/2CN≤CT_f(N)N N^η-1/2. For the third term of (<ref>), using (<ref>), we have Dθ(u_θ(U_N(s-)))[Υ̃_s] = ⟨ v_θ(U_N(s-)),Υ̃_s⟩_θ(U_N(s-))‖ v_θ(U_N(s-))‖_θ(U_N(s-)). As shown in (<ref>), ‖ v_θ_N‖_θ_N=A. From trigonometric formula one has ⟨ v_θ_N,Υ̃_s⟩_θ_N = ⟨ v_θ_N, 2πN∑_i,j=1^N cos(x_i-x_j)f(u_θ_N(x_j))1_B_N,i- ∫_Scos(·-y)f(u_θ_N(y))dy ⟩_θ_N = ( 2πN∑_j=1^N cos(x_j+θ_N)f(u_θ_N(x_j)))( ∑_i=1^N cos(x_i+θ_N) ⟨ v_θ_N, 1_B_N,i⟩_θ_N) + ( 2πN∑_j=1^N sin(x_j+θ_N)f(u_θ_N(x_j))) ( ∑_i=1^N sin(x_i+θ_N) ⟨ v_θ_N, 1_B_N,i⟩_θ_N) - ( ∫_S cos(y+θ_N)f(u_θ_N(y))dy) ⟨ v_θ_N,cos(·+θ_N)⟩_θ_N - ( ∫_S sin(y+θ_N)f(u_θ_N(y))dy) ⟨ v_θ_N,sin(·+θ_N)⟩_θ_N. By invariance of rotation and with Lemma <ref> we have ⟨ v_θ_N,cos(·+θ_N)⟩_θ_N = ℐ(sincos)=0, and similarly ∫_S sin(y+θ_N)f(u_θ_N(y))dy=0. We can then write (<ref>) as ⟨ v_θ_N,Υ̃_s⟩_θ_N=A_1 A_2 + A_3 A_4. From the computations (<ref>), (<ref>), (<ref>) and (<ref>) of Lemma <ref>, we obtain that ⟨ v_θ_N,Υ̃_s⟩_θ_N:= π A^2N+ o(1N). For the second term of (<ref>), we have with Lemma <ref> that ⟨ v_θ_N,sin(·+θ_N)⟩_θ_N =-Aℐ(sin^2)=-A thus ⟨ v_θ_N, Υ_s- Υ̃_s ⟩_θ_N = A_2 ( 2πN∑_j=1^N cos(x_j+θ_N) ( f(U_N(s-)(x_j))-f(u_θ_N(x_j))) ) + A_4 ( 2πN∑_j=1^N sin(x_j+θ_N) ( f(U_N(s-)(x_j))-f(u_θ_N(x_j))) ) + A ∫_S sin(y+θ_N)f(U_N(s-)(y))dy. Let us show that D_N:=2πN∑_j=1^N cos(x_j+θ_N) ( f(U_N(s-)(x_j))-f(u_θ_N(x_j))) = O( N^η-1/2). 
Setting u_θ_N(y):=∑_k=1^N u_θ_N(x_k)1_y∈ B_N,k, we have | D_N | = |∑_j=1^N cos(x_j+θ_N) ∫_S ( f(U_N(s-)(x_j))-f(u_θ_N(x_j)))1_y∈ B_N,jdy | ≤‖ f ‖_lip∑_j=1^N ∫_S | U_N(s-)(y)-u_θ_N(y)|1_y∈ B_N,jdy. With Cauchy–Schwarz inequality and Jensen's discrete inequality, we have | D_N | ≤ C_f ∑_j=1^N ( ∫_S | U_N(s-)(y)-u_θ_N(y)|^2 1_y∈ B_N,j dy)^1/2( ∫_S 1_y∈ B_N,jdy )^1/2 = √(2π N) C_f N∑_j=1^N ( ∫_S | U_N(s-)(y)-u_θ_N(y)|^2 1_y∈ B_N,j dy)^1/2 ≤√(2π N) C_f ( 1N∑_j=1^N ∫_S | U_N(s-)(y)-u_θ_N(y)|^2 1_y∈ B_N,j dy)^1/2 = C√(N)( 1N∫_S | U_N(s-)(y)-u_θ_N(y)|^2 dy)^1/2 = C ‖ U_N(s-)-u_θ_N‖_2 ≤ C ‖ U_N(s-)-u_θ_N‖_2 + C ‖ u_θ_N-u_θ_N‖_2, hence with (<ref>) and as ‖ u_θ_N-u_θ_N‖_2=O(1/N), we have indeed shown that D_N = O( N^η-1/2). Similarly, one can show that 2πN∑_j=1^N sin(x_j+θ_N) ( f(U_N(s-)(x_j))-f(u_θ_N(x_j)))= O( N^η-1/2). Using Lemma <ref> and as ∫_S sin(y+θ_N)f(u_θ_N(y))dy=0, we have ⟨ v_θ_N, Υ_s- Υ̃_s ⟩_θ_N = O( N^η-3/2) + A ∫_S sin(y+θ_N)( f(U_N(s-)(y)) - f(u_θ_N(y)))dy -A 2πN∑_j=1^N sin(x_j+θ_N) ( f(U_N(s-)(x_j))-f(u_θ_N(x_j))). Using Taylor's expansion, we obtain ⟨ v_θ_N, Υ_s- Υ̃_s ⟩_θ_N = o( 1N) + A Δ_N, where Δ_N = ∫_S sin(y+θ_N)f'(u_θ_N(y))(U_N(s-)(y) -u_θ_N(y)))dy - 2πN∑_j=1^N sin(x_j+θ_N)f'(u_θ_N(x_j)) ( U_N(s-)(x_j)-u_θ_N(x_j)). Define u_θ_N(y):=∑_j=1^N u_θ_N(x_j)1_B_N,j(y), we introduce it in Δ_N so that Δ_N = ∑_j=1^N ∫_B_N,j[ sin(y+θ_N)f'(u_θ_N(y))(U_N(s-)(y) -u_θ_N(y))) - sin(x_j+θ_N)f'(u_θ_N(x_j)) ( U_N(s-)(y)-u_θ_N(y))] dy = ∑_j=1^N ∫_B_N,j( U_N(s-)(y)-u_θ_N(y)) ( sin(y+θ_N)f'(u_θ_N(y))- sin(x_j+θ_N)f'(u_θ_N(x_j)) )dy + ∑_j=1^N ∫_B_N,jsin(y+θ_N)f'(u_θ_N(y)) ( u_θ_N(y) - u_θ_N(y)) dy. For the first term of Δ_N, let α_N(y):= sin(y+θ_N)f'(u_θ_N(y))- sin(x_j+θ_N)f'(u_θ_N(x_j)), one has with Cauchy–Schwarz inequality that ∑_j=1^N ∫_S ( U_N(s-)(y)-u_θ_N(y))α_N(y)dy≤∑_j=1^N ( ∫_S ( U_N(s-)(y)-u_θ_N(y))^2 1_B_N,jdy)^1/2( ∫_B_n,jα_N(y)^2dy)^1/2. As ∫_B_n,jα_N(y)^2dy ≤∫_B_n,j(y-x_j)^2dy = O(1N^3/2), for some C>0, using Jensen’s inequality ∑_j=1^N ∫_S ( U_N(s-)(y)-u_θ_N(y))α_N(y)dy ≤C√(N)1N∑_j=1^N ( ∫_S ( U_N(s-)(y)-u_θ_N(y))^2 1_B_N,jdy)^1/2 ≤C√(N)√(1N∑_j=1^N ∫_S ( U_N(s-)(y)-u_θ_N(y))^2 1_B_N,jdy) ≤C√(N)√(1N‖ U_N(s-)-u_θ_N‖_2^2). As ‖ u_θ-u_θ_N‖_2^2=O(1N^2) and with (<ref>), we obtain that the first term of (<ref>) is in O( N^η-1/2N). For the second term of Δ_N, we have ∑_j=1^N ∫_B_N,jsin(y+θ_N)f'(u_θ_N(y)) ( u_θ_N(y) - u_θ_N(y)) dy = ∑_j=1^N ∫_B_N,jsin(y+θ_N)f'(u_θ_N(y)) ( Acos(x_j+θ_N) -Acos(y+θ_N)) dy =A ∑_j=1^N ∫_B_N,jsin(y+θ_N)f'(u_θ_N(y)) sin(x_j+θ_N)(y-x_j) dy+o(1N) = A ∑_j=1^N sin(x_j+θ_N)^2 f'(u_θ_N(x_j)) ∫_B_N,j (y-x_j)dy +o(1N) =A ∑_j=1^N sin(x_j+θ_N)^2 f'(u_θ_N(x_j)) ( - 2π^2N^2) +o(1N) = - πN∫_S A sin(y+θ_N)^2 f'(u_θ_N(y))dy +o(1N)= - AπN +o(1N). Coming back to (<ref>), we have then that ⟨ v_θ_N, Υ_s- Υ̃_s ⟩_θ_N=-π A^2N+ o(1/N). This term (<ref>) cancels with the previous computation (<ref>) up to some rest of order o(1N). We obtain then (<ref>) after integrating on (t_0,t) and using T_f(N) ∝ N. §.§.§ Proof of Lemma <ref> Recall that I_3(t):=∫_t_0^t Dθ(U_N(s-))[dM_N(s)]. Recall the definition of χ_j in (<ref>) and the compensated measure π̃_j, we can re-write the term I_3^N(t_0,t) and introduce Dθ(u_θ_N): I_3(t)=∑_j=1^N ∫_t_0^t ∫_0^∞( Dθ(U_N(s-))- Dθ(u_θ_N))[χ_j(s,z)]π̃_j(ds,dz) + ∑_j=1^N ∫_t_0^t ∫_0^∞ Dθ(u_θ_N)[χ_j(s,z)]π̃_j(ds,dz). Let us focus first on Q_0(t):=∑_j=1^N ∫_t_0^t ∫_0^∞( Dθ(U_N(s-))- Dθ(u_θ_N))[χ_j(s,z)]π̃_j(ds,dz). It is a real martingale. We denote by [Q_0]_t=∑_s≤ t| Δ Q_0(t) |^2 its quadratic variation. 
It is computed as follows (as the (π_j)_1≤ j ≤ N are independent, there are almost surely no simultaneous jumps so that [π̃_j,π̃_j']=0 if j≠ j'): [Q_0]_t = ∑_j=1^N ∫_t_0^t ∫_0^∞( ( Dθ(U_N(s-))- Dθ(u_θ_N))[χ_j(s,z)])^2π_j(ds,dz) ≤ C ‖θ‖_lip^2 ( sup_t∈ [t_0,t]‖ U_N(s-)-u_θ_N‖_2)^2 ∑_j=1^N ∫_t_0^t ∫_0^∞‖χ_j(s,z)‖_2^2 π_j(ds,dz) ≤ C N^2η-11N^2∑_j=1^N ∫_t_0^t ∫_0^∞1_z≤λ_N,j(s)π_j(ds,dz), using (<ref>) and the computation (<ref>) for some constants C>0. Then, by Burkholder-Davis-Gundy inequality and as f is bounded 𝐄[ Q_0(t)^2 ] ≤ C 𝐄[ [Q_0]_t ]≤ C N^2η-3𝐄[∑_j=1^N ∫_t_0^t ∫_0^∞1_z≤λ_N,j(s)π_j(ds,dz)]≤ CN^2η-1T_f(N)N0, hence Q_0(t) converges in L^1 towards 0 as N→∞ uniformly in t. The other term Q(t):=∑_j=1^N ∫_t_0^t ∫_0^∞ Dθ(u_θ(U_N(s-))[χ_j(s,z)]π̃_j(ds,dz) in (<ref>) is also a real martingale, we denote by [Q]_t=∑_s≤ t| Δ Q(t) |^2 its quadratic variation and it is computed as follows: [Q]_t =∑_j=1^N∫_t_0^t ∫_0^∞( Dθ(u_θ_N)[χ_j(s,z)])^2π_j(ds,dz)=∑_j=1^N∫_t_0^t ∫_0^∞( ⟨ v_θ_N, χ_j(s,z) ⟩_θ_N‖ v_θ_N‖_θ_N)^2π_j(ds,dz), where we used (<ref>). Recall the notation w_ij^(N)=2πcos(x_i-x_j), from the computation (<ref>), ‖ v_θ_N‖_θ_N=A hence [Q]_t = 1A^2∑_j=1^N∫_t_0^t ∫_0^∞( ⟨ v_θ_N, ∑_i=1^N 1_B_N,iw_ij^(N)N1_z≤λ_N,j⟩_θ_N)^2π_j(ds,dz) = 1A^2∑_j=1^N∫_t_0^t ∫_0^∞( ∑_i=1^N w_ij^(N)N⟨ v_θ_N, 1_B_N,i⟩_θ_N)^2 1_z≤λ_N,jπ_j(ds,dz). Let us focus on the term E_N:= ∑_i=1^N w_ij^(N)N⟨ v_θ_N, 1_B_N,i⟩_θ_N. We have with trigonometric formula E_N=2πN(cos(x_j+θ_N)( ∑_i=1^N cos(x_i+θ_N) ⟨ v_θ_N, 1_B_N,i⟩_θ_N)+sin(x_j+θ_N)(∑_i=1^N sin(x_i+θ_N) ⟨ v_θ_N, 1_B_N,i⟩_θ_N) ). As ∑_i=1^N cos(x_i+θ_N) ⟨ v_θ_N, 1_B_N,i⟩_θ_N∫_S A cossin f'(Acos) =0 (by symmetry) and ∑_i=1^N sin(x_i+θ_N) ⟨ v_θ_N, 1_B_N,i⟩_θ_N-∫_S A sin^2 f'(Acos) =-A with (<ref>), we have that ∑_i=1^N w_ij^(N)N⟨ v_θ_N, 1_B_N,i⟩_θ_N∼_N→∞ - 2πN A sin(x_j+θ_N). Hence we have ( ∑_i=1^N w_ij^(N)N⟨ v_θ_N, 1_B_N,i⟩_θ_N)^2= A^2N^2Φ(x_j,θ_N) with Φ(x_j,θ_N)∼_N→∞(2πsin(x_j+θ_N))^2 (bounded independently of N, θ_N). Coming back to (<ref>), we have [Q]_t = 1N^2∑_j=1^N∫_t_0^t ∫_0^∞Φ(x_j,θ_N) 1_z≤λ_N,jπ_j(ds,dz)+o(1N). Let Q_1(t) :=1N^2∑_j=1^N∫_t_0^t ∫_0^∞Φ(x_j,θ_N) (1_z≤ f(U_N,j(s-)) -1_z≤ f(u_θ_N(x_j))) π_j(ds,dz) Q_2(t) :=1N^2∑_j=1^N∫_t_0^t ∫_0^∞Φ(x_j,θ_N) 1_z≤ f(u_θ_N(x_j))π̃_j(ds,dz) Q_3(t) :=1N^2∑_j=1^N∫_t_0^t ∫_0^∞Φ(x_j,θ_N) 1_z≤ f(u_θ_N(x_j)) dsdz, so that [Q]_t=Q_1(t)+Q_2(t)+Q_3(t)+o(1N). We have (recall that Φ is bounded) 𝐄[ | Q_1(t)|] ≤1N^2∑_j=1^N 𝐄[ ∫_t_0^t ∫_0^∞Φ(x_j,θ_N) |1_z≤ f(U_N,j(s-)) -1_z≤ f(u_θ_N(x_j))|π_j(ds,dz)] = ‖Φ‖_∞N^2∑_j=1^N ∫_t_0^t 𝐄[ | f(U_N,j(s-)) -f(u_θ_N(x_j))|] ds ≤‖Φ‖_∞‖ f ‖_lipN (t-t_0) C N^η-1/2≤ C T_f(N)N N^η-1/20, using (<ref>). About Q_2, we use once again that Q_2 is a real martingale with quadratic variation [Q_2]_t = ∑_j=1^N∫_t_0^t ∫_0^∞(1N^2Φ(x_j,θ_N) 1_z≤ f(u_θ_N(x_j)))^2 π_j(ds,dz) ≤CN^4∑_j=1^N ∫_t_0^t ∫_0 ^∞1_z≤ f(u_θ_N(x_j))π_j(ds,dz), hence with Burkholder-Davis-Gundy inequality, 𝐄[Q_2(t)^2]≤ C 𝐄[ [Q_2]_t ]≤CN^4𝐄[∑_j=1^N ∫_t_0^t ∫_0^∞1_z≤ f(u_θ_N(x_j))π_j(ds,dz)]≤CN^2T_f(N)N0. The last term Q_3(t)=1N^2∑_j=1^N∫_t_0^t Φ(x_j,θ_N) f(u_θ_N(x_j))ds gives the term Θ_N(t_0,t) in (<ref>). §.§.§ Proof of Lemma <ref> Recall that I_4(t) is defined in (<ref>). 
A Taylor's expansion gives that I_4(t) = ∑_j=1^N∫_t_0^t∫_0^∞∫_0^1 (1-r)D^2θ( U_N(s-) + rχ_j(s,z) ) [χ_j(s,z)]^2 dr π_j(ds,dz) = ∑_j=1^N∫_t_0^t∫_0^∞∫_0^1 (1-r)D^2θ( U_N(s-) + rχ_j(s,z) ) [χ_j(s,z)]^2 dr π_j(ds,dz) +∑_j=1^N∫_t_0^t∫_0^∞∫_0^1 (1-r)( D^2θ( U_N(s-) + rχ_j(s,z) ) -D^2 θ( U_N(s-)) )[χ_j(s,z)]^2 drdsdz +∑_j=1^N∫_t_0^t∫_0^∞∫_0^1 (1-r)( D^2θ( U_N(s-) ) -D^2 θ(u_θ_N) )[χ_j(s,z)]^2 drdsdz +∑_j=1^N∫_t_0^t∫_0^∞∫_0^1 (1-r)D^2 θ(u_θ_N)[χ_j(s,z)]^2 dr dsdz =: L_1(t) + L_2(t) + L_3(t)+L_4(t). L_1 is a real martingale and [L_1](t) = ∑_j=1^N∫_t_0^t∫_0^∞( ∫_0^1 (1-r)D^2θ( U_N(s-) + rχ_j(s,z) ) [χ_j(s,z)]^2 dr)^2 π_j(ds,dz) ≤‖ D^2 θ‖_∞^22∑_j=1^N∫_t_0^t∫_0^∞‖χ_j(s,z) ‖_2^4 π_j(ds,dz)≤CN^4 ∑_j=1^N∫_t_0^t∫_0^∞1_z≤λ_N,j(s)π_j(ds,dz). As done for Q_2 in the proof of Lemma <ref>, we obtain that 𝐄[| L_1(t)^2 |]≤CN^2T_f(N)N0. We have, using (<ref>) and the fact that f is bounded L_2(t) =∑_j=1^N∫_t_0^t∫_0^∞∫_0^1 (1-r)( D^2θ( U_N(s-) + rχ_j(s,z) ) -D^2 θ( U_N(s-)) )[χ_j(s,z)]^2 drdsdz ≤∑_j=1^N∫_t_0^t∫_0^∞‖ D^2θ‖_lip‖χ_j(s,z) ‖_2‖χ_j(s,z)‖_2^2   dsdz ≤ C ∑_j=1^N ∫_t_0^t∫_0^∞( 1N1_z≤λ_N,j(s))^3 dsdz ≤CN^3∑_j=1^N ∫_t_0^t λ_N,j(s) ds ≤C T_f(N)N^2. Similarly, using (<ref>) L_3(t) = ∑_j=1^N∫_t_0^t∫_0^∞∫_0^1 (1-r)( D^2θ( U_N(s-) ) -D^2 θ(u_θ_N) )[χ_j(s,z)]^2 drdsdz ≤12‖ D^2 θ‖_lip∑_j=1^N ∫_t_0^t∫_0^∞‖ U_N(s-)-u_θ_N‖_2‖χ_j(s,z)‖_2^2 dsdz ≤ C N^η-1/2∑_j=1^N ∫_t_0^t∫_0^∞1N^21_z≤λ_N,j(s) dsdz ≤ C T_f(N)N N^η-1/2. For L_4, we use the computation of D^2θ( u_θ_N) [χ_j(s,z)]^2 given by Lemma <ref>: for some C=C_A,γ, L_4(t) = 1/2∑_j=1^N∫_t_0^t∫_0^∞ D^2 θ(u_θ_N)[χ_j(s,z)]^2 dsdz = ∑_j=1^N∫_t_0^t∫_0^∞1_z≤λ_N,j(s)( CN^2cos(x_j+θ)sin(x_j+θ)+ O(N^-3) ) dsdz = CN^2∑_j=1^N∫_t_0^t λ_N,j(s) cos(x_j+θ_N)sin(x_j+θ_N) ds + O( T_f(N)N^2) =CN^2∑_j=1^N∫_t_0^t ( f(U_N(s-)(x_j) - f(u_θ_N(x_j)))cos(x_j+θ_N)sin(x_j+θ_N) ds + O( T_f(N)N^2) + CN^2∑_j=1^N∫_t_0^t f(u_θ_N)cos(x_j+θ_N)sin(x_j+θ_N) ds. As done before for D_N in (<ref>), 2πN∑_j=1^N ( f(U_N(s-)(x_j) - f(u_θ_N(x_j)))cos(x_j+θ_N)sin(x_j+θ_N) = O(N^η-1/2) and CN^2∑_j=1^N f(u_θ_N(x_j))cos(x_j+θ_N)sin(x_j+θ_N) = CN( ∫_S f(u_θ_N(x))cos(x+θ_N)sin(x+θ_N)dx + O(N^-1) ) = O(1N^2), hence as T_f(N)∝ N, L_4(t) = O( N^η-1/2). Combining our results on L_2, L_3, L_4, we have then shown that sup_t∈ [T_0(N),T_f(N)](L_2(t)+L_3(t)+L_4(t)) = O( N^η-1/2) 0. We conclude with (<ref>). § APPENDIX: ON THE STATIONARY SOLUTIONS TO THE NEURAL FIELD EQUATION §.§ When f is the Heaviside function Here we study the NFE equation (<ref>) and its stationary solutions (<ref>) when f=H_ϱ. We recall the results from of <cit.> and <cit.>. There exist non-zero stationary solutions to (<ref>) when f=H_ϱ, ν(dy)= 1_[-π,π)/2πdy and w(x,y)=2π cos(x-y) if and only if ϱ∈ [-1,1], and in this case, the set of stationary solutions is 𝒰_0 ∪ 𝒰_A_+(0) ∪ 𝒰_A_-(0), where A_+(0) and A_-(0) are defined in (<ref>). (following <cit.>) First, u=0 is an evident solution to (<ref>). We focus now on the other solutions. To solve (<ref>), we need to find A solving (<ref>). As Acos(x)=-Acos(x+π), 𝒰_A=𝒰_-A and we can focus on the case A>0. Let A>0 be a solution to (<ref>) with f=H_ϱ. Note that we necessarily need A≥|ϱ|, because if A<ϱ, the threshold ϱ is never reached in (<ref>) hence the unique solution is A=0 which is a contradiction (and similarly for ϱ<-A). Then as |ϱ|≤ A, Arccos (ϱ/A)∈ [0,π] is well defined and verifies Acos(y)≥ϱ⇔| y |≤Arccos (ϱ/A), hence (<ref>) becomes A=2 ∫_0^Arccos(ϱ/A)cos(y)dy = 2 sin( Arccos(ϱ/A))=2√( 1-(ϱA)^2). 
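For completeness, the last equation above can be solved explicitly. The short computation below is our addition; it assumes (consistently with the formula for dgda(0,A_+(0)) used in the next subsection) that A_+(0) and A_-(0) in (<ref>) denote the two roots obtained here. For A>0, squaring both sides and multiplying by A^2 reduces the equation to a quadratic equation in A^2:
A=2√(1-(ϱ/A)^2) ⟺ A^4-4A^2+4ϱ^2=0 ⟺ A^2=2± 2√(1-ϱ^2),
so that the two non-negative candidate amplitudes are A_±(0)=√(2± 2√(1-ϱ^2)), which are real if and only if ϱ∈[-1,1], and both satisfy A_±(0)≥|ϱ|.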
Equation (<ref>) has two non-negative solutions A_+(0) and A_-(0) defined in (<ref>) if and only if ϱ∈ [-1,1], which indeed verify ϱ∈ [-A,A], hence the result. §.§ When f is a sigmoid Here we prove Proposition <ref>, following the previous result when f=H_ϱ and using the fact that f_κ,ϱH_ϱ. Define the function g:ℝ× (|ϱ|,+∞) →ℝ such that {[ g(κ,a):=a-∫_-π^πcos(y) f_κ,ϱ( acos(y))dy, (κ,a)∈ℝ_+^*×(|ϱ|,+∞),; g(κ,a):=a-∫_-π^πcos(y) H_ϱ( acos(y))dy=a-2√(1-(ϱa)^2), (κ,a)∈ℝ_-× (|ϱ|,+∞). ]. As f_κ,ϱ H_ϱ, by dominated convergence, g is continuous on ℝ×(ϱ,+∞). It is differentiable on ℝ_+^*×(ϱ,+∞) and on ℝ_-^*×(ϱ,+∞), we now focus on its differentiability in (0,a) for any a∈(ϱ,+∞). We first show the continuity of dgda, that is showing lim_κ→ 0dgda(κ,a)=dgda(0,a)=1-2ϱ^2a^3 √(1-(ϱa)^2). For any κ>0, recalling the definition of f_κ,ϱ in (<ref>), dgda(κ,a) = 1 - ∫_-π^πcos(y)^2 e^-(acos(y)-ϱ)/κκ(1+e^-(acos(y)-ϱ)/κ)^2dy=1 - 2 ∫_0^πcos(y)^2 e^-(acos(y)-ϱ)/κκ(1+e^-(acos(y)-ϱ)/κ)^2dy, and by the change of variables acos(y)-ϱ=u, we get ∫_0^πcos(y)^2 e^-(acos(y)-ϱ)/κκ(1+e^-(acos(y)-ϱ)/κ)^2dy =∫_-a-ϱ^a-ϱ(u+ϱ)^2 a^3 √(1-(u+ϱ/a)^2)e^-u/κκ( 1 + e^-u/κ)^2 du = 1a^3∫_ℝ h(-u) φ_κ(u) du = 1a^3 (h ∗φ_κ)(0) with h(u):=1_(ϱ-a,a+ϱ)(u) (-u+ϱ)^2√(1-(-u+ϱ/a)^2) and φ_κ(u):=e^-u/κκ( 1 + e^-u/κ)^2. By Lemma <ref>, (h ∗φ_κ)(0)h(0)=ϱ^2√(1-(ϱa)^2) and (<ref>) follows. We show now the continuity of dgdκ, that is lim_κ→ 0dgdκ(κ,a)=0. For any κ>0, we obtain similarly dgdκ(κ,a)=2 ∫_0^πcos(y) (acos(y)-ϱ) e^-(acos(y)-ϱ)/κκ^2 ( 1+e^-(acos(y)-ϱ)/κ)^2dy= 2a^2κ∫_(-a-ϱ)/κ^(a-ϱ)/κh̃(κ v) e^-v( 1 + e^-v)^2dv with h̃(u):=u(u+ϱ)√(1-(u+ϱ/a)^2). Let F(κ):=∫_0^(a-ϱ)/κh̃(κ v) e^-v( 1 + e^-v)^2dv, by dominated convergence F(κ)h̃(0) ∫_0^∞e^-v( 1 + e^-v)^2dv = h̃(0)2=0. Setting F(0):=0, F is continuous on [0,∞) and differentiable on (0,∞) with F'(κ)=-(a-ϱ)κ^2h̃(a-ϱ)e^-(a-ϱ)/κ( 1 + e^-(a-ϱ)/κ)^2 + ∫_0^(a-ϱ)/κ vh̃'(κ v) e^-v( 1 + e^-v)^2dv. By dominated convergence, F'(κ) 0+h̃'(0)∫_0^∞ve^-v( 1 + e^-v)^2dv=h̃'(0)ln(2)=ϱln(2)√(1+(ϱ/a)^2). Hence by Taylor's theorem, F(κ) = κϱln(2)√(1+(ϱ/a)^2)+o(κ) as κ→ 0. Similarly, let G(κ):=∫_(-a-ϱ)/κ^0 h̃(κ v) e^-v( 1 + e^-v)^2dv=∫_0^(a+ϱ)/κh̃(-vκ) e^v( 1 + e^v)^2dv, we also have G(κ)→ 0. Setting G(0):=0, G is differentiable on (0,∞) with G'(κ)=-a+ϱκ^2h̃(-a-ϱ)e^(a+ϱ)/κ( 1 + e^(a+ϱ)/κ)^2 -∫_0^(a+ϱ)/κ vh̃'(κ v)e^v( 1 + e^v)^2dv -h̃'(0) ln(2). Hence by Taylor's theorem G(κ)=-κϱln(2)√(1+(ϱ/a)^2) + o(κ) as κ→ 0. We obtain then dgdκ(κ,a)= 2a^2κ( F(κ) + G(κ)) = 2a^2κ( κh̃'(0)ln(2) - κh̃'(0)ln(2) + o(κ))=o(1), hence (<ref>) is true. We have shown that g is indeed 𝒞^1 on ℝ× (|ϱ|,+∞). Our aim is to apply the implicit function theorem. With Proposition <ref>, we have that g(0,A_+(0))=0. Let us show that dgda(0,A_+(0)) ≠ 0. Using (<ref>), we obtain dgda(0,A_+(0))= 1-2ϱ^22(1 + √(1-ϱ^2))√(2 + 2√(1-ϱ^2) -ϱ^2), we then need ϱ^2≠(1+ √(1-ϱ^2))√(2 + 2√(1-ϱ^2) -ϱ^2), which is true if and only if ϱ≠ 1. We conclude by implicit function theorem. It remains now to prove that there exists κ_1>0 such that for any κ∈ (0,κ_1), I(1,κ)=∫_S f_κ,ϱ'(A(κ)cos(x))dx∈ (1,2). We have ℐ(1,κ) =2 ∫_0^πe^-(A(κ)cos(y)-ϱ)/κκ(1+e^-(A(κ)cos(y)-ϱ)/κ)^2dy =2∫_-A(κ)-ϱ^A(κ)-ϱ1A(κ)√(1-(u+ϱ/A(κ))^2) e^- u/κκ(1+e^- u/κ)^2= h∗ϕ_κ(0)2A_+(0)√(1-ϱ^2/A_+(0)^2), with h(u)=1_(ϱ-A(κ), A(κ)+ϱ)(u)2A(κ)√(1-(-u+ϱ/A(κ))^2) and using Lemma <ref> and as A(κ) A_+(0) defined in (<ref>). As 1A_+(0)√(1-ϱ^2/A_+(0)^2)=1√(2 + 2√(1-ϱ^2)-ϱ^2)<1 when ϱ∈ (-1,1), by continuity of κ↦ A(κ) there exists κ_1>0 such that for κ<κ_1, we have indeed I(1,κ)<2. 
Let us show know that for small κ we have also I(1,κ)>1. We have I(1,0)-1=2√(2+2√(1-ϱ^2)-ϱ^2)-1=2-√(2+2√(1-ϱ^2)-ϱ^2)√(2+2√(1-ϱ^2)-ϱ^2), and as 2√(1-ϱ^2)-ϱ^2<2 we have indeed I(1,0)-1>0. Similarly by continuity it implies that I(1,κ)>1 for κ small enough. § APPENDIX: SOME COMPUTATIONS §.§ Control of the noise perturbation We prove here Proposition <ref>, which is a part of the Step 2 of the proof of Theorem <ref> in Section <ref>. The proof relies on a adaptation of an argument given in <cit.> (Theorem 4.3), where a similar quantity to the following (<ref>) is considered for N=1, and used in the proof of Proposition 4.2 of <cit.>. Recall the expression of (Z_N,j)_1≤ j ≤ N in (<ref>). Introduce the compensated measure π̃_j(ds,dz):=π_j(ds,dz)-λ_N,jdsdz, so that with the linearity of (e^tℒ_ϕ_ n-1)_t≥ 0, we obtain that ζ_n can be written as ζ_n(t) = ∑_j=1^N ∫_0^t∫_0^∞ e^(t-s)ℒ_ϕ_n-1χ_j(s,z) π̃_j(ds,dz), with χ_j(s,z):=( ∑_i=1^N 1_B_N,iw_ij^(N)N1_z≤λ_N,j(s)). Fix m≥ 1. The functional ϕ:L^2(I)→ℝ given by ϕ(v)=‖ v ‖_2^2m is of class 𝒞^2 (recall that ζ_n(t) ∈ L^2(I)) so that by Itô formula on the expression (<ref>) we obtain ϕ(ζ_n(t)) = ∫_0^t ϕ'(ζ_n(s)) ℒ_ϕ_n-1(ζ_n(s))ds + ∑_j=1^N ∫_0^t ∫_0^∞ϕ'(ζ_n(s-))χ_j(s,z)π̃_j(ds,dz) + ∑_j=1^N∫_0^t∫_0^∞[ ϕ( ζ_n(s-)+χ_j(s,z)) - ϕ(ζ_n(s-)) - ϕ'(ζ_n(s-))χ_j(s,z)]π_j(ds,dz) := I_0(t) + I_1(t) + I_2(t). We also have that for any v,h,k∈ L^2(I), ϕ'(v)h=2m‖ v ‖_2^2m-2(⟨ v,h⟩)∈ℝ and ϕ”(v)(h,k)=2m(2m-1)‖ v ‖_2^2m-4⟨ v,k ⟩⟨ v,h ⟩ +2m‖ v ‖^2m-2⟨ h,k⟩. We have I_0(t)= ∫_0^t 2m ‖ζ_N(s) ‖_2^2m-2( ⟨ζ_N(s),ℒ(ζ_N(s))⟩)ds. From Proposition <ref>, ℒ_ϕ_n-1 has only three non-positive eigenvalues hence by Lumer-Philipps Theorem (see Section 1.4 of <cit.>), ( ⟨ζ_n(s),ℒ_ϕ_n-1(ζ_Nns)))⟩≤ 0. Then for any t≥ 0, I_0(t)≤ 0. Let some ε>0 to be chosen later. About I_1, using Burkholder-Davis-Gundy inequality, the independence of the family (π_j) and Hölder inequality with some well chosen parameter, one can show 𝐄[ sup_s≤ T|I_1(s)|] ≤ C (2m-1) ε𝐄[ sup_0≤ s ≤ T( ‖ζ_n(s)‖_2^2m) ] + C ε^-(2m-1)𝐄[(∑_j=1^N∫_0^T∫_0^∞‖χ_j(s,z)‖_2^2 π_j(ds,dz))^m], as done for Proposition 4.2 of <cit.>, with C some deterministic constant. About I_2, using Taylor's Lagrange formula, Hölder and Young's inequalities, one can show 𝐄[ sup_s≤ T|I_2(s)|] ≤ m(2m-2) ε𝐄[sup_0≤ s ≤ t( ‖ζ_n(s)‖_2^2m)] + 2m ε^-(2m-2)𝐄[(∑_j=1^N∫_0^t∫_0^∞‖χ_j(s,z)‖_2^2 π_j(ds,dz))^m]. Taking the expectation in (<ref>) and fixing ε such that ε( C(2m-1)+m(2m-2) ) ≤1/2 , we get 𝐄[sup_s≤ T‖ζ_n(s) ‖_2^2m] ≤ 2 C𝐄[(∑_j=1^N∫_0^T∫_0^∞‖χ_j(s,z)‖_2^2 π_j(ds,dz))^m], where C>0 depends only on m. As sup_i,jw_ij^(N)≤ 2π, ‖χ_j(s,z)‖_2^2 = 1N∑_i=1^N(w_ij^(N)N)^2 1_z≤λ_N,j(s)≤4π^2N^21_z≤λ_N,j(s). As f is bounded by 1, we have that 𝐄[sup_s≤ T‖ζ_n(s) ‖_2^2m] ≤CN^m𝐄[1N∑_j=1^N Z_j(T)^m], where (Z_j(t)) are i.i.d copies of a Poisson process on intensity 1. Hence for some constant C=C(T,m,κ,ϱ)>0, for any 1≤ n ≤ n_f, 𝐄[ sup_0≤ t ≤ T‖ζ_n(t) ‖_2^2m]≤CN^m. It implies 𝐏( sup_t∈[0,T]‖ζ_n(t) ‖_2 ≥N^η√(N)) ≤𝐄[ sup_0≤ t ≤ T‖ζ_n(t) ‖_2^2m] N^2η m N^m ≤ C N^-2mη, hence by a union bound 𝐏(A_N^C)≤ C n_f N^-2mη=CN^α-2mη. We can then choose m large enough to obtain the result of Proposition <ref>. §.§ Analysis complements Define φ(u)=e^-u(1+e^-u)^2. For any κ>0, let φ_κ(u):=1/κφ(u/κ). Then (φ_κ)_κ>0 is an approximate identity and φ_κ∗ h h for any h∈ L^p, with 1≤ p < ∞. It suffices to check that ∫_ℝφ(u)du= ∫_ℝe^-u(1+e^-u)^2du= [11+e^-u)]_-∞^+∞=1. Let N≥ 1, recall that S=[-π,π) and its regular subdivision x_i=2iπN-π for 0≤ i ≤ N. 
For any function g∈𝒞^2(I,ℝ), we have 2πN∑_j=1^N g(x_j) = ∫_S g(y)dy - 12( 2πN)^2 ∑_j=1^N g'(x_j) + o(1N). Moreover, for any function h∈𝒞^1(I,ℝ), we have ∑_i=1^N h(x_i) ∫_x_i-1^x_i g(y)dy = ∫_S h(x)g(x)dx - ∑_i=1^N h'(x_i) ∫_x_i-1^x_i (y-x_i)g(y)dy+ o(1N). Let C_j=(x_j-1,x_j) for 1≤ j≤ N. From Taylor's expansion, g(y)= g(x_j)+g'(x_j)(y-x_j)+∫_x_j^yg”(t)(y-t)dt hence the result (<ref>) as ∫_S g(y)dy=∑_j=1^N ∫_C_j g(y)dy. About (<ref>), we proceed similarly as ∫_S hg= ∑_j∫_C_jg(y) ( h(x_j)+h'(x_i)(y-x_j)+∫_x_j^y h”(t)(y-t) dt)dy. §.§ Auxilliary lemmas §.§.§ About the derivatives of the isochron Let ϕ∈ S. There exists C=C_A,γ such that D^2θ(u_ϕ)[χ_j(s,z)]^2= 1_ z≤λ_ N, j(s)( C N^ 2cos(x_ j+ ϕ) sin(x_ j+ ϕ) + O(N^ -3)), where the notation O(N^-3) is uniform in (s,z,ϕ). Recall (<ref>), we have D^2θ(u_ϕ)[χ_j(s,z)]^2= 12A^2( 2α_ϕ^∘(χ_j(s,z))β_ϕ(v_ϕ,χ_j(s,z))+β_ϕ(χ_j(s,z),χ_j(s,z))) + 1+ γ/ A^ 2(1- γ)α_ϕ^γ(χ_j(s,z))β_ϕ(u_ϕ,χ_j(s,z)) - (2- γ)(1+ γ)/ 2(1-γ)( α_ϕ^∘(χ_j(s,z))^2+α_ϕ^γ(χ_j(s,z))^2), Let us compute each term. About α, using some trigonometric formula and Lemma <ref> we have α_ϕ^∘(χ_j(s,z)) = ⟨χ_j, v_ϕ⟩_ϕA=1A∫_S χ_j v_ϕ f'(u_ϕ)= 2πA1_z≤λ_N,j(s)∑_i=1^N cos(x_i-x_j)N∫_S v_ϕ f'(u_ϕ) 1_B_N,i = 2πAN1_z≤λ_N,j(s)( cos(x_j+ϕ) ∑_i=1^N cos(x_i+ϕ) ∫_S v_ϕ f'(u_ϕ) 1_B_N,i + sin(x_j+ϕ)∑_i=1^N sin(x_i+ϕ)∫_S v_ϕ f'(u_ϕ) 1_B_N,i) = 2πAN1_z≤λ_N,j(s)( cos(x_j+ϕ) ∫_Scos(x+ϕ) v_ϕ(x) f'(u_ϕ(x))dx + sin(x_j+ϕ)∫_S sin(x+ϕ) v_ϕ(x) f'(u_ϕ(x))dx + O(N^-1)) = 2πAN1_z≤λ_N,j(s)sin(x_j+ϕ)∫_S sin(x+ϕ) v_ϕ(x) f'(u_ϕ(x))dx + 1_z≤λ_N,j(s)O(N^-2) = 1_z≤λ_N,j(s)(- 2πNsin(x_j+ϕ) ℐ(sin^2) +O(N^-2))= 1_z≤λ_N,j(s)(- 2πNsin(x_j+ϕ) +O(N^-2)), using Lemma <ref>. We prove in a same way that α_ϕ^γ( χ_ j(s,z))= 1_ z≤λ_ N, j(s)( 2 π‖ u_ϕ‖_ϕ Ncos(x_ j+ ϕ) (ℐ(1)-1)+ O(N^ -2)). About β, we have similarly using Lemma <ref> that β_ϕ(v_ϕ,χ_j(s,z)) = ∫_Sf”(u_ϕ(y))v_ϕ(y)^2 χ_j(s,z)(y)dy = ∑_i=1^N w_ij^(N)N1_z≤λ_N,j(s)∫_S f”(u_ϕ(y))v_ϕ(y)^2 1_B_N,i(y)dy = 1_z≤λ_N,j(s)2πN( cos(x_j+ϕ)∑_i=1^N cos(x_i+ϕ)∫_S f”(u_ϕ(y))v_ϕ(y)^2 1_B_N,i(y)dy . .+ sin(x_j+ϕ)∑_i=1^N sin(x_i+ϕ) ∫_S f”(u_ϕ(y))v_ϕ(y)^2 1_B_N,i(y)dy ) =1_z≤λ_N,j(s)2πN( cos(x_j+ϕ) ∫_S cos(y+ϕ) f”(u_ϕ(y))v_ϕ(y)^2dy . .+ sin(x_j+ϕ) ∫_S sin(y+ϕ)f”(u_ϕ(y))v_ϕ(y)^2dy+O( 1N) ) = 1_z≤λ_N,j(s)2πN( cos(x_j+ϕ) ∫_S cos(y+ϕ) f”(u_ϕ(y))v_ϕ(y)^2dy+O( 1N) ). With Lemma <ref> and an integration by parts, we obtain ∫_S cos(y+ϕ) f”(u_ϕ(y))v_ϕ(y)^2dy = A^2 ∫_S cos(y+ϕ) f”(Acos(y+ϕ))sin^2(y+ϕ)dy = ∫_S ( -Asin(y) f”(Acos(y))( -Asin(y)cos(y))dy = - ∫_S f'(Acos(y))( -A + 2Asin^2)dy = A( ℐ(1) - 2ℐ(sin^2))=Aγ recalling (<ref>), hence β_ϕ(v_ϕ,χ_j(s,z)) = 1_z≤λ_N,j(s)(2πN Aγcos(x_j+ϕ) + O(N^-2)). We prove in a same way that β_ϕ(u_ϕ, χ_ j(s,z))= - 1_ z≤λ_ N, j(s)( 2 π NA γsin(x_ j+ ϕ) +O(N^ -2)). Finally we have β_ϕ(χ_j(s,z),χ_j(s,z))= ∫_S f”(u_ϕ(y))v_ϕ(y) (∑_i=1^N 1_B_N,i(y) w_ij^(N)N1_z≤λ_N,j(s))^2dy = 1_z≤λ_N,j(s)( 2πN)^2 ∑_i=1^N ( cos(x_i+ϕ)cos(x_j+ϕ) + sin(x_i+ϕ)sin(x_j+ϕ))^2 ∫_B_N,i(y) f”(u_ϕ(y))v_ϕ(y) dy =1_z≤λ_N,j(s)( 2πN)^2 ( cos(x_j+ϕ)^2 ∫_S cos(y+ϕ)^2 f”(u_ϕ(y))v_ϕ(y) dy . .+ sin(x_j+ϕ)^2 ∫_S sin(y+ϕ)^2 f”(u_ϕ(y))v_ϕ(y) dy . . +2 cos(x_j+ϕ)sin(x_j+ϕ) ∫_S cos(y+ϕ)sin(y+ϕ) f”(u_ϕ(y))v_ϕ(y) dy+ O(N^-1)) =1_z≤λ_N,j(s)[( 2πN)^2 2 cos(x_j+ϕ)sin(x_j+ϕ) ∫_S cos(y+ϕ)sin(y+ϕ) f”(u_ϕ(y))v_ϕ(y) dy+O(N^-3)]. With an integration by parts and recognising (<ref>), ∫_S cos(y+ϕ)sin(y+ϕ) f”(u_ϕ(y))v_ϕ(y) dy =-A ∫_S cos(y)sin(y) f”(Acos(y))sin(y) dy = ∫_S ( -Asin(y) f”(Acos(y)) ) ( cos(y)sin(y)) dy= - γ, we obtain that β_ϕ(χ_j(s,z),χ_j(s,z)) = 1_z≤λ_N,j(s)( - 2 γ( 2πN)^2 cos(x_j+ϕ)sin(x_j+ϕ) + O(N^-3)). 
Putting all the previous estimates together in (<ref>), we obtain (<ref>) for some constant C=C_ A, γ. §.§.§ About the fluctuations Let ϕ∈ S. Recall the definitions of u_ϕ and v_ϕ in (<ref>) and (<ref>). We have A_1 := 2πN∑_j=1^N cos(x_j+ϕ)f(u_ϕ(x_j))=A+ o(1/N) A_2 := ∑_i=1^N cos(x_i+ϕ) ⟨ v_ϕ, 1_B_N,i⟩_ϕ = AπN+ o(1/N) A_3 := 2πN∑_j=1^N sin(x_j+ϕ)f(u_ϕ(x_j)) = o(1/N) A_4 := ∑_i=1^N sin(x_i+ϕ) ⟨ v_ϕ, 1_B_N,i⟩_ϕ = -A + o(1/N), where the notation o(1/N) is uniform in the choice of ϕ. From Lemma <ref>, more especially (<ref>) applied to g(y)=cos(y+ϕ)f(u_ϕ(y)), we have that A_1 = ∫_S cos(x+ϕ)f(u_ϕ(x))dx + 2π^2N^2∑_j=1^N ( sin(x_j+ϕ)f(u_ϕ(x_j))-cos(x_j+ϕ)f'(u_ϕ(x_j))v_ϕ(x_j)) + o(1/N) = A +2π^2N^2∑_j=1^N ( sin(x_j+ϕ)f(u_ϕ(x_j))-cos(x_j+ϕ)f'(u_ϕ(x_j))v_ϕ(x_j)) + o(1/N) = A+ o(1/N), using (<ref>) and as 2πN∑_j=1^N ( sin(x_j+ϕ)f(u_ϕ(x_j))-cos(x_j+ϕ)f'(u_ϕ(x_j))v_ϕ(x_j)) = ∫_S sin (y+ϕ)f(Acos(y+ϕ)dx +A ∫_S cos(y+ϕ) f'(Acos(y+ϕ)) sin(y+ϕ) + O(1/N) = O(1/N). Similarly we can prove (<ref>) as A_3 = ∫_S sin(x+ϕ)f(u_ϕ(x))dx - 2π^2N^2∑_j=1^N ( cos(x_j+ϕ)f(u_ϕ(x_j))+sin(x_j+ϕ)f'(u_ϕ(x_j))v_ϕ(x_j)) + o(1/N) = - 2π^2N^2∑_j=1^N ( cos(x_j+ϕ)f(u_ϕ(x_j))+sin(x_j+ϕ)f'(u_ϕ(x_j))v_ϕ(x_j)) + o(1/N)= o(1/N), using that ∫_S sin(x+ϕ)f(u_ϕ(x))dx=0 by symmetry and 2πN∑_j=1^N ( cos(x_j+ϕ)f(u_ϕ(x_j))+sin(x_j+ϕ)f'(u_ϕ(x_j))v_ϕ(x_j)) = ∫_S ( cos f(Acos) - sin f'(Acos) Asin) + O(1/N)=A-A+O(1/N)=O(1/N). From Lemma <ref>, more especially (<ref>) applied to g(y)=v_ϕ(y)f'(u_ϕ(y)) and h(x)=cos(x+ϕ), we have that A_2 = ∑_i=1^N cos(x_i+ϕ) ⟨ v_ϕ, 1_B_N,i⟩_ϕ= ∑_i=1^N cos(x_i+ϕ) ∫_B_N,i v_ϕ(y) f'(u_ϕ(y)) dy = ∫_S cos(x+ϕ)v_ϕ(x) f'(u_ϕ(x))dx + ∑_i=1^N sin(x_i+ϕ) ∫_B_N,i(y-x_i)v_ϕ(y)f'(u_ϕ(y))dy+o(1/N) =∑_i=1^N sin(x_i+ϕ) ∫_B_N,i(y-x_i)v_ϕ(y)f'(u_ϕ(y))dy+o(1/N) = ∑_i=1^N sin(x_i+ϕ)v_ϕ(x_i)f'(u_ϕ(x_i)) ∫_B_N,i(y-x_i)dy+o(1/N) =- ∑_i=1^N sin(x_i+ϕ) v_ϕ(x_i)f'(u_ϕ(x_i)) 1/2(2πN)^2+o(1/N) =πN(A ∫sin(x+ϕ)^2 f'(Acos(x+ϕ))dx + O(1/N) )+o(1/N)= AπN+ o(1/N) and similarly, for the choice h(x)=sin(x+ϕ) and using (<ref>) A_4 = ∑_i=1^N sin(x_i+ϕ) ⟨ v_ϕ, 1_B_N,i⟩_ϕ= ∑_i=1^N sin(x_i+ϕ) ∫_B_N,i v_ϕ(y) f'(u_ϕ(y)) dy = ∫_S sin(x+ϕ)v_ϕ(x) f'(u_ϕ(x))dx - ∑_i=1^N cos(x_i+ϕ) ∫_B_N,i(y-x_i)v_ϕ(y)f'(u_ϕ(y))dy+o(1/N) =-A - ∑_i=1^N cos(x_i+ϕ) ∫_B_N,i(y-x_i)v_ϕ(y)f'(u_ϕ(y))dy+o(1/N) =-A +A ∑_i=1^N cos(x_i+ϕ) ∫_B_N,i(y-x_i)sin(y+ϕ)f'(u_ϕ(y))dy+o(1/N). As ∑_i=1^N cos(x_i+ϕ) ∫_B_N,i(y-x_i)sin(y+ϕ)f'(u_ϕ(y))dy =∑_i=1^N cos(x_i+ϕ) sin(x_i+ϕ)f'(u_ϕ(x_i))∫_B_N,i(y-x_i)dy+O(1/N^2) = -πN2πN∑_i=1^N cos(x_i+ϕ) sin(x_i+ϕ)f'(u_ϕ(x_i))+o(1/N) = -πN∫_Scos(x+ϕ) sin(x+ϕ)f'(u_ϕ(x))dx +O(1/N^2)+o(1/N)= o(1/N), we obtain (<ref>). abbrv 10 AdamsMacLaurin2022arxiv Z. P. Adams and J. MacLaurin. The Isochronal Phase of Stochastic PDE and Integral Equations: Metastability and Other Properties, 2022. arXiv:2210.10681. agathenerine_longtime_arxiv Z. Agathe-Nerine. Long-term stability of interacting Hawkes processes on random graphs, 2022. arXiv:2207.13942. agathenerine2021multivariate Z. Agathe-Nerine. Multivariate Hawkes processes on inhomogeneous random graphs. Stochastic Process. Appl., 152:86–148, 2022. Amari1977 S.-I. Amari. Dynamics of pattern formation in lateral-inhibition type neural fields. Biological Cybernetics, 27(2):77–87, 1977. Baladron2012 J. Baladron, D. Fasoli, O. Faugeras, and J. Touboul. Mean-field description and propagation of chaos in networks of Hodgkin-Huxley and FitzHugh-Nagumo neurons. The Journal of Mathematical Neuroscience, 2(1):10, 2012. bertini14 L. Bertini, G. Giacomin, and C. Poquet. 
Synchronization and random long time dynamics for mean-field plane rotators. Probab. Theory Related Fields, 160(3-4):593–653, 2014. billingsley P. Billingsley. Convergence of Probability Measures. Wiley, New York, 1968. Bolley:2013 F. Bolley, I. Gentil, and A. Guillin. Uniform convergence to equilibrium for granular media. Arch. Ration. Mech. Anal., 208(2):429–445, 2013. bonnet_21 A. Bonnet, M. Martinez Herrera, and M. Sangnier. Maximum likelihood estimation for Hawkes processes with self-excitation or inhibition. Statist. Probab. Lett., 179:Paper No. 109214, 7, 2021. Bosking1997 W. H. Bosking, Y. Zhang, B. Schofield, and D. Fitzpatrick. Orientation selectivity and the arrangement of horizontal connections in tree shrew striate cortex. The Journal of Neuroscience, 17(6):2112–2127, Mar. 1997. bremaud1996stability P. Brémaud and L. Massoulié. Stability of nonlinear Hawkes processes. The Annals of Probability, pages 1563–1588, 1996. bressloff_waves_2014 P. C. Bressloff. Waves in neural media. Lecture Notes on Mathematical Modelling in the Life Sciences. Springer, New York, 2014. From single neurons to neural fields. bressloff_webber_12_front P. C. Bressloff and M. A. Webber. Front propagation in stochastic neural fields. SIAM J. Appl. Dyn. Syst., 11(2):708–740, 2012. CCC2022 P. Cattiaux, L. Colombani, and M. Costa. Limit theorems for Hawkes processes including inhibition. Stochastic Process. Appl., 149:404–426, 2022. Chevallier2017 J. Chevallier. Mean-field limit of generalized Hawkes processes. Stochastic Process. Appl., 127(12):3870–3912, 2017. CHEVALLIER20191 J. Chevallier, A. Duarte, E. Löcherbach, and G. Ost. Mean field limits for nonlinear spatially extended Hawkes processes with exponential memory kernels. Stochastic Processes and their Applications, 129(1):1 – 27, 2019. chevallierMT2021 J. Chevallier, A. Melnykova, and I. Tubikanec. Diffusion approximation of multi-class Hawkes processes: theoretical and numerical analysis. Adv. in Appl. Probab., 53(3):716–756, 2021. ChevallierOst2020 J. Chevallier and G. Ost. Fluctuations for spatially extended Hawkes processes. Stochastic Process. Appl., 130(9):5510–5542, 2020. kilpatrick2022arxiv H. L. Cihak, T. L. Eissa, and Z. P. Kilpatrick. Distinct excitatory and inhibitory bump wandering in a stochastic neural field, 2022. arXiv:2203.02438. Colombani2022 L. Colombani and P. L. Bris. Chaos propagation in mean field networks of FitzHugh-Nagumo neurons, 2022. arXiv:2206.13291. Coppini2022 F. Coppini. Long time dynamics for interacting oscillators on graphs. Ann. Appl. Probab., 32(1):360–391, 2022. Cormier2020 Q. Cormier, E. Tanré, and R. Veltz. Long time behavior of a mean-field model of interacting neurons. Stochastic Processes and their Applications, 130(5):2553–2595, May 2020. Costa2020 M. Costa, C. Graham, L. Marsalle, and V. C. Tran. Renewal in Hawkes processes with self-excitation and inhibition. Advances in Applied Probability, 52(3):879–915, Sept. 2020. davydov2022propagation M. Davydov. Propagation of chaos and poisson hypothesis for replica mean-field models of intensity-based neural networks, 2022. arXiv:2211.11490. Delarue2015 F. Delarue, J. Inglis, S. Rubenthaler, and E. Tanré. Global solvability of a networked integrate-and-fire model of McKean–Vlasov type. The Annals of Applied Probability, 25(4):2096–2133, Aug. 2015. Delarue2021 F. Delarue and A. Tse. Uniform in time weak propagation of chaos on the torus, 2021. delattre2016 S. Delattre, N. Fournier, and M. Hoffmann. Hawkes processes on large networks. Ann. Appl. 
Probab., 26(1):216–261, 02 2016. Ditlevsen2017 S. Ditlevsen and E. Löcherbach. Multi-class oscillating systems of interacting neurons. Stochastic Process. Appl., 127(6):1840–1869, 2017. Duarte2016 A. Duarte, E. Löcherbach, and G. Ost. Stability, convergence to equilibrium and simulation of non-linear Hawkes processes with memory kernels given by the sum of Erlang kernels. ESAIM Probab. Stat., 23:770–796, 2019. duval_lucon_pouzat2022 C. Duval, E. Luçon, and C. Pouzat. Interacting Hawkes processes with multiplicative inhibition. Stochastic Process. Appl., 148:180–226, 2022. ermentrout_mcleod93 G. B. Ermentrout and J. B. McLeod. Existence and uniqueness of travelling waves for a neural network. Proc. Roy. Soc. Edinburgh Sect. A, 123(3):461–478, 1993. erny2023annealed X. Erny. Annealed limit for a diffusive disordered mean-field model with random jumps, 2023. arXiv:2210.13128. ethier_kurtz1986 S. N. Ethier and T. G. Kurtz, editors. Markov Processes. John Wiley & Sons, Inc., Mar. 1986. faugeras_inglis_SNFE2015 O. Faugeras and J. Inglis. Stochastic neural field equations: a rigorous footing. J. Math. Biol., 71(2):259–300, 2015. Georgopoulos1982 A. Georgopoulos, J. Kalaska, R. Caminiti, and J. Massey. On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex. The Journal of Neuroscience, 2(11):1527–1537, Nov. 1982. Poquet_GB_L14 G. Giacomin, E. Luçon, and C. Poquet. Coherence stability and effect of random natural frequencies in populations of coupled oscillators. J. Dynam. Differential Equations, 26(2):333–367, 2014. giacomin2012 G. Giacomin, K. Pakdaman, X. Pellegrin, and C. Poquet. Transitions in Active Rotator Systems: Invariant Hyperbolic Manifold Approach. SIAM Journal on Mathematical Analysis, 44(6):4165–4194, 2012. giacomin_poquet2015 G. Giacomin and C. Poquet. Noise, interaction, nonlinear dynamics and the origin of rhythmic behaviors. Brazilian Journal of Probability and Statistics, 29(2):460 – 493, 2015. Giacomin2018 G. Giacomin, C. Poquet, and A. Shapira. Small noise and long time phase diffusion in stochastic limit cycle oscillators. J. Differential Equations, 264(2):1019–1049, 2018. guckenheimer1974 J. Guckenheimer. Isochrons and phaseless sets. J. Math. Biol., 1(3):259–273, 1975. HAWKES1971 A. G. Hawkes. Point spectra of some self-exciting and mutually exciting point processes. Biometrika, 58(1):83–90, 1971. Heesen2021 S. Heesen and W. Stannat. Fluctuation limits for mean-field interacting nonlinear Hawkes processes. Stochastic Process. Appl., 139:280–297, 2021. inglis_mclaurin_16 J. Inglis and J. MacLaurin. A general framework for stochastic traveling waves and patterns, with application to neural field equations. SIAM J. Appl. Dyn. Syst., 15(1):195–234, 2016. kilpatrick2013 Z. P. Kilpatrick and B. Ermentrout. Wandering bumps in stochastic neural fields. SIAM J. Appl. Dyn. Syst., 12(1):61–94, 2013. kruger_stannat_14_front J. Krüger and W. Stannat. Front propagation in stochastic neural fields: a rigorous mathematical framework. SIAM J. Appl. Dyn. Syst., 13(3):1293–1310, 2014. kuramoto75 Y. Kuramoto. Self-entrainment of a population of coupled non-linear oscillators. In International Symposium on Mathematical Problems in Theoretical Physics (Kyoto Univ., Kyoto, 1975),, Lecture Notes in Phys., 39., pages 420–422. ,, 1975. lang_stannat2016 E. Lang and W. Stannat. L^2-stability of traveling wave solutions to nonlocal evolution equations. J. Differential Equations, 261(8):4275–4297, 2016. lapique1907 L. Lapique. 
Recherches quantitatives sur l'excitation électrique des nerfs traitée comme une polarization. Journal of Physiology and Pathololgy, 9:620–635, 1907. lucon-poquet2021 E. Luçon and C. Poquet. Periodicity and longtime diffusion for mean field systems in ℝ^d, 2021. arXiv:2107.02473. Luon2016 E. Luçon and W. Stannat. Transition from gaussian to non-gaussian fluctuations for mean-field diffusions in spatial interaction. The Annals of Applied Probability, 26(6):3840–3909, Dec. 2016. lucon_poquet2017 E. Luçon and C. Poquet. Long time dynamics and disorder-induced traveling waves in the stochastic Kuramoto model. Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, 53(3):1196 – 1240, 2017. maclaurin_2023 J. MacLaurin. Phase Reduction of Waves, Patterns, and Oscillations Subject to Spatially Extended Noise. SIAM J. Appl. Math., 83(3):1215–1244, 2023. MacLaurinBressloff2020 J. N. MacLaurin and P. C. Bressloff. Wandering bumps in a stochastic neural field: a variational approach. Phys. D, 406:132403, 9, 2020. DeMasi2014 A. D. Masi, A. Galves, E. Löcherbach, and E. Presutti. Hydrodynamic limit for interacting neurons. Journal of Statistical Physics, 158(4):866–902, Nov. 2014. Ogata1988 Y. Ogata. Statistical models for earthquake occurrences and residual analysis for point processes. Journal of the American Statistical Association, 83(401):9–27, Mar. 1988. Pazy1974 A. Pazy. Semi-groups of linear operators and applications to partial differential equations. University of Maryland, Department of Mathematics, College Park, Md., 1974. Department of Mathematics, University of Maryland, Lecture Note, No. 10. Pfaffelhuber2022 P. Pfaffelhuber, S. Rotter, and J. Stiefel. Mean-field limits for non-linear Hawkes processes with excitation and inhibition. Stochastic Process. Appl., 153:57–78, 2022. Prodhomme_2023 A. Prodhomme. Strong Gaussian approximation of metastable density-dependent Markov chains on large time scales. Stochastic Process. Appl., 160:218–264, 2023. raad2020stability M. B. Raad and E. Löcherbach. Stability for Hawkes processes with inhibition. Electron. Commun. Probab., 25:Paper No. 33, 9, 2020. Shriki2003 O. Shriki, D. Hansel, and H. Sompolinsky. Rate models for conductance-based cortical neuronal networks. Neural Computation, 15(8):1809–1841, Aug. 2003. Sznitman1989 A.-S. Sznitman. Topics in propagation of chaos. In P.-L. Hennequin, editor, Ecole d'Eté de Probabilités de Saint-Flour XIX — 1989, pages 165–251, Berlin, Heidelberg, 1991. Springer Berlin Heidelberg. Touboul2014 J. Touboul. Propagation of chaos in neural fields. The Annals of Applied Probability, 24(3):1298–1328, June 2014. veltzFaugeras2010 R. Veltz and O. Faugeras. Local/global analysis of the stationary solutions of some neural field equations. SIAM J. Appl. Dyn. Syst., 9(3):954–998, 2010. Wilson1972 H. R. Wilson and J. D. Cowan. Excitatory and inhibitory interactions in localized populations of model neurons. Biophysical Journal, 12(1):1–24, Jan. 1972. Zhu2017 J. Zhu, Z. Brzeźniak, and E. Hausenblas. Maximal inequalities for stochastic convolutions driven by compensated Poisson random measures in Banach spaces. Ann. Inst. Henri Poincaré Probab. Stat., 53(2):937–956, 2017.
Manifold Filter-Combine Networks
Joyce Chew, Edward De Brouwer, Smita Krishnaswamy, Deanna Needell and Michael Perlmutter
July 8, 2023 (arXiv:2307.04056v2; primary category stat.ML, cross-listed in cs.LG, cs.NA, eess.SP, math.NA)
We introduce a class of manifold neural networks (MNNs) that we call Manifold Filter-Combine Networks (MFCNs), which aims to further our understanding of MNNs, analogous to how the aggregate-combine framework helps with the understanding of graph neural networks (GNNs). This class includes a wide variety of subclasses that can be thought of as the manifold analog of various popular GNNs. We then consider a method, based on building a data-driven graph, for implementing such networks when one does not have global knowledge of the manifold, but merely has access to finitely many sample points. We provide sufficient conditions for the network to provably converge to its continuum limit as the number of sample points tends to infinity. Unlike previous work (which focused on specific graph constructions), our rate of convergence does not directly depend on the number of filters used. Moreover, it exhibits linear dependence on the depth of the network rather than the exponential dependence obtained previously. Additionally, we provide several examples of interesting subclasses of MFCNs and of the rates of convergence that are obtained under specific graph constructions.
§ INTRODUCTION
Geometric deep learning <cit.> is an emerging field that aims to extend the success of deep learning from data such as images, with a regular grid-like structure, to more irregular domains such as graphs and manifolds. As part of the rise of geometric deep learning, graph neural networks (GNNs) have rapidly emerged as an extremely active area of research in data science <cit.> and are also used in industrial applications such as Google Maps <cit.> and Amazon's product recommender system <cit.>. However, there has been much less work on the development of Manifold Neural Networks (MNNs), and much of the existing literature focuses on two-dimensional surfaces embedded in three-dimensional space <cit.>. In this paper, we consider the more general setting of a compact, connected, d-dimensional Riemannian manifold ℳ embedded in D-dimensional space. One of the principal challenges in extending deep learning to graphs and manifolds is developing a proper notion of convolution, which is non-trivial because there is no natural notion of translation. In the graph setting, a popular family of solutions, known as spectral methods, defines convolution via the eigendecomposition of the graph Laplacian (or another suitable matrix). A limitation of this method is that explicitly computing eigendecompositions is expensive for large graphs. To overcome this obstacle, spectral graph neural networks such as ChebNet <cit.> and CayleyNet <cit.> define convolution in terms of polynomials of the graph Laplacian 𝐋=𝐃-𝐀. This leads to filters of the form h(𝐋)𝐱, where h is a polynomial and 𝐱 is a signal defined on the vertices of the graph. With this notion of convolution, one may consider networks with layerwise update rules of the form: 𝐱^(ℓ+1)=σ(h^(ℓ)(𝐋)𝐱^(ℓ)), where σ is a pointwise, nonlinear activation function. If one is given multiple initial graph signals 𝐱_1,…, 𝐱_C organized into a data matrix 𝐗=(𝐱_1,…,𝐱_C) and uses multiple filters in each layer, then the layerwise update rule can be extended to 𝐱^(ℓ+1)_k=σ(∑_j=1^C h^(ℓ)_j,k(𝐋)𝐱^(ℓ)_j).
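To make these update rules concrete, the following sketch is ours (the function names, the choice of a polynomial filter, and the toy shapes are illustrative assumptions, not taken from the papers cited above); it shows how filters of the form h(𝐋)𝐱 and the multi-filter update rule above could be realized in NumPy using only repeated matrix–vector products.

import numpy as np

def graph_laplacian(A):
    # Combinatorial graph Laplacian L = D - A for a symmetric adjacency matrix A.
    return np.diag(A.sum(axis=1)) - A

def poly_filter(L, x, coeffs):
    # Apply h(L) x with h(t) = coeffs[0] + coeffs[1] t + coeffs[2] t^2 + ...,
    # accumulating powers of L acting on x so that L is never eigendecomposed.
    out = np.zeros_like(x, dtype=float)
    p = x.astype(float)
    for c in coeffs:
        out += c * p
        p = L @ p
    return out

def spectral_layer(L, X, coeffs, sigma=np.tanh):
    # One layer x_k^{(l+1)} = sigma( sum_j h_{j,k}(L) x_j^{(l)} ), where
    # coeffs[j][k] holds the polynomial coefficients of the filter h_{j,k}.
    n, C_in = X.shape
    C_out = len(coeffs[0])
    X_new = np.zeros((n, C_out))
    for j in range(C_in):
        for k in range(C_out):
            X_new[:, k] += poly_filter(L, X[:, j], coeffs[j][k])
    return sigma(X_new)

The sketch only emphasizes that spectral filtering reduces to a few matrix–vector products per channel; how the filter coefficients are parameterized and learned is discussed next.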
If one assumes that each filter h^(ℓ)_j,k belongs to a parameterized family of functions such as Chebyshev polynomials, one could then attempt to learn the optimal parameters from training data. Inspired by this approach, Wang, Ruiz, and Ribeiro <cit.> have introduced manifold neural networks with layerwise update rules similar to (<ref>). In particular, they assume that they are given C functions f_1,…,f_C:ℳ→ℝ and utilize a layerwise update rule of the form f^(ℓ+1)_k=σ(∑_j=1^C h^(ℓ)_j,k(ℒ)f^(ℓ)_j), where ℒ=-div∘∇ is the Laplace-Beltrami operator, the natural analog of the graph Laplacian in the manifold setting. They then provide an analysis of the stability of such networks to absolute and relative perturbations of the Laplace-Beltrami operator. However, many popular graph neural networks take an approach different from (<ref>). Rather than using multiple learnable filters for each input channel and then summing across channels, they instead filter each graph signal with a pre-designed operator (or operators) and then learn relationships between the filtered input signals. For example, the Graph Convolutional Network (GCN)[Here, we use the term GCN to refer to the specific network introduced in <cit.>. We will use the term GNN to refer to a general graph neural network] <cit.> performs a predesigned aggregation 𝐗→𝐀̂𝐗, where 𝐀̂=(𝐃+𝐈)^-1/2(𝐀+𝐈)(𝐃+𝐈)^-1/2 is the symmetrically normalized adjacency matrix with added self-loops, and utilizes a right-multiplication by a trainable weight matrix Θ to learn relationships between the channels. This leads to the layerwise update rule 𝐗^(ℓ+1)=σ(𝐀̂𝐗^(ℓ)Θ^(ℓ)), where σ is as in (<ref>).[The matrix 𝐀̂ can be obtained by applying the polynomial h(λ)=1-λ/2 to a normalized version of the graph Laplacian and then making some adjustments which help with the training of the network. Therefore, we can essentially think of the operation 𝐱→𝐀̂𝐱 as a spectral convolution.] This raises an intriguing question: How should manifold neural networks be designed? Should they follow the lead of (<ref>) and (<ref>) and utilize multiple learnable filters for each input channel with a predesigned summation over channels, or should they utilize predesigned filtering operations and incorporate learning via cross-feature operations analogous to (<ref>)? It is likely that the answer to this question will vary depending on the dataset and the task of interest. Networks with multiple learnable filters for each channel are more general and will have greater expressive power. On the other hand, networks that, for example, use a common (either learnable or designed) filterbank shared across all channels are a more constrained family of networks. This constraint imposes a certain structure on the network and reduces the number of trainable parameters, which may provide a useful inductive bias in certain settings and may be particularly useful in low-data environments. Another critical challenge in the development of manifold neural networks is that in many applications one does not have global knowledge of the manifold. Instead, one is given a collection of points {x_j}_j=1^n in some high-dimensional Euclidean space ℝ^D and makes the modeling assumption that the points x_j lie on some d-dimensional manifold with d≪ D. This assumption, known as the manifold hypothesis, is frequently used in the analysis of biomedical data arising from, e.g., single-cell imaging <cit.>. This leads us to the following question: How can one implement a manifold neural network when one does not have global knowledge of the manifold but only has access to finitely many sample points?
In order to help answer this question, several works such as <cit.> have used an approach based on Laplacian eigenmaps <cit.> (see also <cit.>) where one builds a data-driven graph 𝐆_n such that the eigenvectors and eigenvalues of the graph Laplacian approximate the eigenfunctions and eigenvalues of the Laplace-Beltrami Operator. They show that if the graph is constructed properly, then a graph neural network of the form (<ref>) will converge to a continuum limit of the form (<ref>) as the number of sample points, n, tends to infinity. However, these results are limited in the sense that (i) they assume specific graph constructions and (ii) their rates of convergence depend exponentially on the depth of the network. In this work, we introduce a new framework for understanding MNNs that we call Manifold Filter-Combine Networks. The manifold filter-combine paradigm is meant to parallel the aggregate-combine framework commonly considered in the GNN literature (see, e.g., <cit.>) and naturally leads one to consider many interesting classes of MNNs which may be thought of as the manifold counterparts of various popular GNNs. We then provide sufficient conditions for such networks to converge to a continuum limit as the number of sample points, n, tends to infinity. More specifically, the contributions of this work are: * We introduce Manifold Filter-Combine Networks as a novel framework for understanding MNNs. This framework readily leads one to many interesting classes of MNNs such as the manifold equivalent of Kipf and Welling's GCN <cit.>, learnable variations of the manifold scattering transform <cit.>, and many others. * In Theorem <ref>, we provide sufficient conditions for the individual filters used in an MNN to provably converge to a continuum limit as n→∞ if the filtering is done via a spectral approach. Here the rate of convergence depends on the rates at which the eigenvectors/eigenvalues of the graph Laplacian approximate the eigenfunctions/eigenvalues of the Laplace-Beltrami operator as well as the rate at which discrete inner products approximate continuum inner products. * In Theorem <ref>, we prove that if the individual filters converge as n→∞, then so does the entire MNN. The rate of convergence will depend on (i) the rate of convergence of the individual filters; (ii) the weights used in the network; (iii) the depth of the network. Importantly, we note that our dependence on the depth of the network is linear, rather than the exponential dependence obtained in previous work. Additionally, our rate does not directly depend on the number of filters used per layer. We also note that Theorem <ref> does not assume that the filters have any particular form. Therefore, if one were to prove results analogous to Theorem <ref> for non-spectral filters, then Theorem <ref> would immediately imply the convergence of networks constructed from those filters. * We then provide several corollaries to Theorem <ref>, which give concrete examples of our results in special cases of interest in Corollaries <ref>, <ref>, <ref>, and <ref>. These results may be summarized as follows: * If the filters are implemented spectrally, then the discretization error of the entire MFCN tends to zero at a rate depending on how fast the eigenvalues/eigenvectors of the Laplacian corresponding to the data-driven graph 𝐆_n converge to the eigenvalues/eigenfunctions of the continuum Laplacian and how fast discrete inner products converge to continuum inner products. 
* If 𝐆_𝐧 is constructed via a Gaussian kernel and the filters are implemented spectrally, then (up to log factors) the discretization error is 𝒪(n^-2/(d+6)). * If 𝐆_𝐧 is constructed via a k-NN graph or an ϵ-graph and the filters are implemented spectrally, then (up to log factors) the discretization error is 𝒪(n^-1/(d+4)). §.§ Notation We let ℳ be a compact, connected, d-dimensional Riemannian manifold with normalized Riemannian volume form μ such that μ(ℳ)=1. We let 𝐋^2(ℳ) denote the set of functions that are square integrable with respect to μ and 𝒞(ℳ) denote the set of continuous functions on ℳ. We let ℒ=-div∘∇ denote the Laplace-Beltrami operator and let {ϕ_i}_i=1^∞ denote an orthonormal basis of eigenfunctions ℒϕ_i=λ_iϕ_i, with 0=λ_1<λ_2≤…. We will use these eigenfunctions to define Fourier coefficients denoted by f(i). In much of our analysis, we will assume that ℳ is unknown and that we only have access to a function f∈𝒞(ℳ) evaluated at sample points {x_j}_j=1^n⊆ℝ^D. In this setting, we will let P_n:𝒞(ℳ)→ℝ^n be the normalized evaluation operator (P_nf)(i)=1/√(n)f(x_i), and let 𝐆_n denote a graph whose vertices are the sample points x_j. We will let 𝐋_n denote the graph Laplacian associated to 𝐆_n and let ϕ_i^n be an orthonormal basis of eigenvectors, 𝐋_nϕ_i^n=λ^n_iϕ_i^n, 0=λ^n_1≤λ^n_2≤…≤λ^n_n. Analogous to the continuous setting, we will use the ϕ_i^n to define discrete Fourier coefficients 𝐱(i). In this paper, we consider a family of neural networks to process functions defined on ℳ. Towards this end, we will let F=(f_1,…,f_C) denote a row-vector valued function and let F^(ℓ) denote the hidden representation in the ℓ-th layer of our network, with F^(0)=F. When we approximate our network on 𝐆_n, we will instead assume that we are given an n× C data matrix 𝐗=(𝐱_1,…,𝐱_C). §.§ Organization The rest of this paper is organized as follows. In Section <ref>, we will provide an overview of spectral convolution on manifolds, explain how to implement such networks on point clouds, and state a theorem providing sufficient criteria for the discrete point-cloud implementation to converge to the continuum limit as the number of sample points tends to infinity. In Section <ref>, we introduce manifold-filter combine networks, discuss several examples of networks contained in our framework, and state a theorem showing that a discrete point cloud implementation converges to the continuum limit as well as several corollaries focusing on specific graph constructions. In Appendices <ref> and <ref>, we will prove the theorems stated in Sections <ref> and <ref>. We will conduct numerical experiments in Section <ref>, before providing a brief conclusion in Section <ref>. § SPECTRAL CONVOLUTION ON MANIFOLDS As alluded to in the introduction, the extension of convolutional methods to the manifold setting is non-trivial because there is no natural notion of translation. Many possible solutions to this problem have been proposed including methods based on parallel transport <cit.>, local patches <cit.>, or Fréchet means <cit.>. In this section, we will focus on spectral methods that rely on a generalized Fourier transform defined in terms of the eigendecomposition of the Laplace-Beltrami operator. Let ℳ be a compact d-dimensional Riemannian manifold without boundary, and let ℒ be the Laplace-Beltrami operator on ℳ. It is well-known that ℒ has an orthonormal basis of eigenfunctions {ϕ_i}_i=1^∞ with ℒϕ_i=λ_iϕ_i, λ_i≥ 0. 
This implies that for f∈𝐋^2(ℳ), we may write f=∑_i=1^∞f(i) ϕ_i, where, for 1≤ i <∞, f(i) is the generalized Fourier coefficient defined by ⟨ f,ϕ_i⟩_𝐋^2(ℳ). Motivated by the convolution theorem in real analysis, we will define manifold convolution as multiplication in the Fourier domain. In particular, give a bounded measurable function w:[0,∞)→ℝ, we define a spectral convolution operator, w(ℒ):𝐋^2(ℳ)→𝐋^2(ℳ) by w(ℒ)f=∑_i=1^∞ w(λ_i) f(i) ϕ_i. By Plancherel's theorem, we may observe that w(ℒ)f_𝐋^2(ℳ)=(∑_i=1^∞ |w(λ_i)|^2|f(i)|^2)^1/2≤w_𝐋^∞([0,∞))f_𝐋^2(ℳ). Additionally, we note that since these spectral convolution operators are defined in terms of a function w:[0,∞)→ℝ, one may verify that the w(ℒ) does not depend on the choice of the orthonormal basis {ϕ_i}_i=1^∞. (See for example Remark 1 of <cit.>.) In our analysis of such filters, similar to <cit.> and <cit.>, we will assume that w is Lipschitz, and let A_Lip denote the smallest constant such that for all a,b∈[0,∞) we have |w(a)-w(b)| ≤ A_Lip(w)|a-b|. We will also assume that either f or w(ℒ) is bandlimited as defined below. Let κ>0, let f∈𝐋^2(ℳ), and let w(ℒ) be a spectral filter. We say that f is κ-bandlimited if f(i)=0 for all i>κ. Similarly, w(ℒ) is said to be κ-bandlimited if w(λ_i)=0 for all i>κ. §.§ Implementation of Spectral Filters on Point Clouds In many applications of interest, one does not know the manifold ℳ. Instead, one is given access to finitely many sample points x_1,…,x_n∈ℝ^D and makes the modeling assumption that these sample points lie upon (or near) an unknown d-dimensional Riemannian manifold for some d≪ D. In this setup, it is non-trivial to actually implement a neural network since one does not have global knowledge of the manifold. Here, we will use an approach based on manifold learning <cit.> where we construct a data-driven graph 𝐆_n, whose vertices are the sample points x_1,…,x_n, and use the eigenvectors and eigenvalues of the graph Laplacian 𝐋_n to approximate the eigenfunctions and eigenvalues of the Laplace-Beltrami operator. As we will discuss below, there are numerous methods for constructing 𝐆_n including k-nn graphs, ϵ-graphs, and graphs derived from Gaussian kernels. More specifically, we let {ϕ_i^n}_i=1^n be an orthonormal basis of eigenvectors, 𝐋_n ϕ_i^n = λ_i^n ϕ_i^n, 0=λ_1^n≤λ_2^n≤…λ_n^n, and analogous to (<ref>) we will write 𝐱=∑_i=1^n 𝐱(i) ϕ_i^n, 𝐱(i)=⟨𝐱,ϕ^n_i⟩_2 for 𝐱∈ℝ^n. We then define a discrete approximation of w(ℒ) defined by w(𝐋_n)𝐱=∑_i=1^∞ w(λ^n_i) 𝐱(i) ϕ^n_i. Our hope is that if 𝐆_n is constructed properly, then w(𝐋_n)P_nf-P_nw(ℒ)f_2 will converge to zero as n tends to infinity, where P_n:𝒞(ℳ)→ℝ^n is the normalized evaluation operator defined as in (<ref>). Notably, in order to bound w(𝐋_n)P_nf-P_nw(ℒ)f_2 we must account for three sources of discretization error: * The graph eigenvalue λ_i^n does not exactly equal the manifold eigenvalue λ_i. Intuitively, this should yield an error on the order of α_i,nA_Lip(w), where α_i,n=|λ_i-λ_i^n|. * The graph eigenvector ϕ_i^n does not exactly equal P_nϕ_i, the discretization of the true continuum eigenfunction. One may anticipate this yielding errors of the order β_i,n, where β_i,n=ϕ_i^n-P_nϕ_i_2. * The discrete Fourier coefficient 𝐱(i) is not exactly equal to f(i). Since Fourier coefficients are defined in terms of inner products, one expects this error to be controlled by a term γ_n which describes how much discrete inner products ⟨ P_n f,P_n g⟩_2 differ from continuum inner products ⟨ f,g⟩_𝐋^2(ℳ). 
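Before quantifying these error sources, it may help to see how w(𝐋_n) would typically be applied in practice. The sketch below is ours (the truncation level kappa, the choice of filter, and the use of scipy's eigsh are illustrative assumptions); it takes a symmetric graph Laplacian 𝐋_n built from the sample points by any of the constructions discussed in the examples below and filters a vector through its κ smallest eigenpairs, which recovers w(𝐋_n)𝐱 exactly whenever 𝐱 or w(𝐋_n) is κ-bandlimited.

import numpy as np
from scipy.sparse.linalg import eigsh

def apply_spectral_filter(L_n, x, w, kappa):
    # Truncated version of w(L_n) x = sum_i w(lambda_i^n) <x, phi_i^n> phi_i^n,
    # using the kappa smallest eigenpairs of the (symmetric) graph Laplacian L_n.
    lam, phi = eigsh(L_n, k=kappa, which="SM")  # lam[i] ~ lambda_i^n, phi[:, i] ~ phi_i^n
    x_hat = phi.T @ x                           # discrete Fourier coefficients
    return phi @ (w(lam) * x_hat)

# Example usage (hypothetical): a low-pass filter applied to the normalized evaluation
# of f, x = f(sample_points) / sqrt(n), which plays the role of P_n f above:
# y = apply_spectral_filter(L_n, x, lambda lam: np.exp(-lam), kappa=32)

The three sources of error listed above enter precisely here: lam approximates the continuum eigenvalues, phi the sampled eigenfunctions, and x_hat the continuum Fourier coefficients.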
Combining these sources of error, and letting α_n=max_iα_i,n,β_n=max_iβ_i,n, one anticipates that if either f or w(ℒ) is κ bandlimited, then the total error will be 𝒪(κ(α_nA_Lip(w)+β_n+γ_n)). This intuition is formalized in the following theorem. For a proof, please see Appendix <ref>. Let w:[0,∞)→ℝ, w_𝐋^∞([0,∞))≤ 1, let f∈𝐋^2(ℳ) be a continuous function, and assume that either f or w(ℒ) is κ-bandlimited. Assume that there exist sequences of real numbers {α_n}_n=1^∞, {β_n}_n=1^∞, {γ_n}_n=1^∞, with lim_n→∞α_n=lim_n→∞β_n=lim_n→∞γ_n=0, such that for all 1≤ i ≤κ and for n sufficiently large, we have |λ_i-λ^n_i|≤α_n, P_nϕ_i-ϕ_i^n_2≤β_n, |⟨ P_nf, P_ng ⟩_2 - ⟨ f,g⟩_𝐋^2(ℳ)| ≤γ_n^2fg_𝐋^∞(ℳ), Then for n large enough such that (<ref>) holds and α_n,β_n,γ_nκ^1/2≤ 1, we have w(𝐋_n)P_nf-P_nw(ℒ)f_2≤ C_ℳκ((A_Lip(w)α_n+β_n)f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)). Furthermore, for all n large enough such that (<ref>) holds and α_n,β_n,γ_nκ^1/2≤ 1 and all 𝐱∈ℝ^n, we have w(𝐋_n)𝐱-P_nw(ℒ)f_2≤𝐱-P_nf_2 + C_ℳκ((A_Lip(w)α_n+β_n)f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)), where, in both (<ref>) and (<ref>), C_ℳ is a constant depending on the geometry of ℳ. In particular, if 𝐱=P_nf, (<ref>) implies that lim_n→∞w(𝐋_n)𝐱-P_nw(ℒ)f_2=0. Inspecting the proof of Theorem <ref>, one may note that A_Lip(w) may actually be replaced by the Lipschitz constant on the smallest interval containing all λ_i and all λ_i^n, 1≤ i ≤κ, where λ_i≠λ_i^n. This means that, if f is bandlimited, our result may be applied to any continuously differentiable function w. Moreover, for most common graph constructions, we have λ_1=λ_1^n=0 and 0<λ_2,λ_2^n. This implies that our theorem can be applied to any w which is continuously differentiable on (0,∞) even if, for example, lim_t→ 0^+w'(t)=+∞ (which is the case for certain wavelets, such as those considered in <cit.>). Additionally, we note that with minor modifications, results similar to Theorem <ref> may be obtained for functions or filters which are approximately bandlimited in the sense that either sup_k>κ|w(λ_k)| or ∑_k>κ|f(k)|^2 are sufficiently small. In these cases, we will have lim sup_n→∞w(𝐋_n)𝐱-P_nw(ℒ)f_2 ≤sup_k>κ|w(λ_k)|f_𝐋^2(ℳ) or lim sup_n→∞w(𝐋_n)𝐱-P_nw(ℒ)f_2 ≤w_∞(∑_k>κ|f(k)|^2)^1/2. In particular, results similar to Theorem <ref> may be obtained for filters w_t(λ) e^-tλ, which correspond to the heat kernel. In the following section, we will consider neural networks constructed from spectral filters and use Theorem <ref> to show that discrete approximations of such networks converge to their continuum limit as n→∞. However, first, we will consider several examples of graph constructions where estimates for α_n and β_n are known. In all of the examples below, we will assume that the data points x_i are generated i.i.d. uniformly at random (with respect to the normalized Riemannian volume form μ). In this setting, Lemma 5 of <cit.> implies that with probability at least 1 - 𝒪(1/n^9) we have γ_n = (18log(n)/n)^1/4. We note that in <cit.> the inequality (<ref>) was derived via Hoeffding's inequality which is why the definition of γ_n involves the ℓ^∞ norm of fg. However, if one were to use a different method, such as Bernstein's inequality to derive bounds for |⟨ P_nf, P_ng ⟩_2 - ⟨ f,g⟩_𝐋^2(ℳ)| in terms of other norms, then all of our proof techniques could likely be pushed through to obtain results similar to Theorem <ref>. [Gaussian Kernels] One simple way to construct a graph is with a Gaussian kernel. 
Specifically, given a bandwidth parameter ϵ, we define a weighted adjacency matrix 𝐖_ϵ whose entries are given by [𝐖_n,ϵ]_i,j = 1/nϵ^1 + d/2e^-𝐱_i - 𝐱_j_2^2 / ϵ and let 𝐃_n,ϵ be the corresponding diagonal degree matrix. Then the associated graph Laplacian 𝐋_n,ϵ is 𝐋_n, ϵ = 𝐃_n, ϵ-𝐖_n, ϵ. In this case, if ϵ∼ n^-2/(d+6), and the data points x_i are generated i.i.d. uniformly at random, then Theorem 5.4 of <cit.> implies that, under mild assumptions, we may choose α_n = C_ℳ n^-2/d+6, β_n = C_ℳ n^-2/d+6√(log(n)), with probability at least 1 - 𝒪(1/n^9)[For details on how to deduce (<ref>) from Theorem 5.4 of <cit.> we refer the reader to Remark 1 of <cit.> and the proof of Theorem 10 of <cit.>.]. Estimates such as these were used to analyze the convergence of the manifold scattering transform on Gaussian-kernel graphs in <cit.> and more general MNNs in <cit.> and <cit.>. While constructing a graph from a kernel is simple, it has the drawback of producing dense graphs which pose computational issues for large values of n. Therefore, we also consider two methods for constructing sparse graphs that have previously been analyzed in works such as <cit.> and <cit.>. [ϵ-graphs] Let ϵ>0, let η:[0,∞)→ [0,∞) be a nonincreasing function supported on the interval [0,1] such that η(1/2)>0 and the restriction of η to [0,1] is Lipschitz continuous. A weighted ϵ-graph is constructed by placing an edge between all x_i,x_j such that |x_i-x_j|≤ϵ. Then, if x_i and x_j are connected by an edge, the corresponding entry in a weighted adjacency matrix is given by [𝐖_n,ϵ]_i,j=η(|x_i-x_j|/ϵ). The ϵ-graph Laplacian is then given by 𝐋=c_η/nϵ^d+2(𝐃_n,ϵ-𝐖_n,ϵ), where c_η is the constant c_η = ∫_ℝ^d |y_1|^2 η(|y|)dy, and y_1 is the first coordinate of a vector y ∈ℝ^d, and 𝐃_n,ϵ is the weighted degree matrix corresponding to 𝐖_n,ϵ. Theorems 2.4 and 2.7 of <cit.> show, for example, that if ϵ is chosen as ϵ∼ ( log(n)/n )^1/d+4, then, under mild assumptions, we may choose α_n = C_ℳ ( log(n)/n )^1/d+4, β_n = C_ℳ ( log(n)/n )^1/d+4 with probability at least 1 - 𝒪(n^-9). Estimates similar to (<ref>) were used to analyze the convergence of MNNs on ϵ-graphs in <cit.> and <cit.>. The graph Laplacians of ϵ-graphs are sparse by construction, and their sparsity is indirectly controlled by the length scale parameter ϵ. To directly control the sparsity of the graph Laplacian in an adaptive manner without specifying a length scale, one may also consider k-NN graphs. [k-NN graphs] For a positive integer k, symmetric k-Nearest Neighbor (k-NN) graphs are constructed by placing an edge between x_i and x_j if x_j is one of the k closest points to x_i (with respect to the Euclidean distance) or[One might also consider mutual k-NN graphs where we require x_i to be one of the k closest points to x_j and x_j to be one of the k-closest points to x_i. However, such graphs are not analyzed in the theorem we cite from <cit.>.] if x_i is one of the k closest points to x_j. Then, the edges can be given weights in a manner similar to <Ref>. Formally, let ϵ_k(x_i) denote the distance from x_i to its k-th closest neighbor (with respect to Euclidean distance) and let r_k(x_i,x_j) max{ϵ_k(x_i),ϵ_k(x_j)}. Then, if x_i and x_j are connected by an edge in the k-NN graph, the corresponding entry in a weighted adjacency matrix is given by [𝐖_n,k]_i,j = η ( |x_i - x_j|/r_k(x_i,x_j) ) where η satisfies the same assumptions as in <Ref>. Note that if η(t) = χ_[0,1](t), then we obtain the standard unweighted k-NN graph. 
The k-NN graph Laplacian is then given by 𝐋_n,k=c_η/n(nc_d/k)^1+2/d(𝐃_n,k-𝐀_n,k), where c_η is defined as in <Ref>, c_d is the volume of the d-dimensional Euclidean unit ball, 𝐖_n,k is the unweighted adjacency matrix associated with the k-NN graph, and 𝐃_n,k is the corresponding degree matrix. If η(t) = χ_[0,1](t), then c_η = c_d/d+2. Theorems 2.5 and 2.9 of <cit.> show that, for example, if k is chosen as k ∼log(n)^d/d+4 n^4/d+4, then, under mild assumptions, we may choose α_n = C_ℳ ( log(n)/n )^1/d+4, β_n = C_ℳ ( log(n)/n )^1/d+4 with probability at least 1 - 𝒪(n^-9). Corollary <ref> stated in Section <ref> applies these estimates to establish the convergence of MFCNs for k-NN graphs. To the best of our knowledge, this is the first result to establish a quantitative rate of convergence for MNNs in this setting. Comparing the examples above, we see that the rates of convergence are faster for dense graphs. Therefore, they may be preferable when n is only moderately large, but one still desires a good approximation of the continuum. However, for very large n, dense graphs become expensive to store in memory. Therefore, one might instead prefer to utilize either ϵ- or k-NN graphs. We also note that the theorems discussed above do not explicitly guarantee that P_nϕ_i≈ϕ_i^n. Instead, they show that P_nϕ_i≈±ϕ_i^n. However, as discussed earlier our spectral filters do not depend on the choice of orthonormal basis. Therefore, we may ignore this issue when applying Theorem <ref>. § MANIFOLD FILTER-COMBINE NETWORKS In this section, we introduce a novel framework for thinking about manifold neural networks. We will refer to the networks we consider as Manifold Filter-Combine Networks paralleling the aggregate-combine framework commonly used in the graph setting (see, e.g., <cit.>). Here, we will use the term filter, rather than aggregate because our filters may be arbitrary linear operators on 𝐋^2(ℳ) (which in most examples will be defined in terms of some notion of convolution) and are not required to be localized averaging operations. Much of our analysis (except for Theorem <ref>) focuses on the case that the filtering step is implemented in the spectral domain. In this case, the class of all MFCN coincides with the class of MNNs considered in previous work such as <cit.>. However, even in the spectral case, we find that the filter-combine paradigm is a useful framework for thinking about MNNs since it naturally leads one to many interesting subclasses of networks and also allows us to obtain convergence rates that do not directly depend on the width of the network. We will assume that our input data is a row-vector[We define the output of F to be ℝ^1× C in order to highlight the parallels with the data matrices commonly considered in the GNN literature where rows correspond to vertices and columns correspond to features.] valued function F∈𝐋^2(ℳ,ℝ^1× C), F=(f_1,…,f_C), where each f_i∈𝐋^2(ℳ). 
Each hidden layer of the network will consist of the following five steps: (i) filtering each input channel f_k by a family of linear operators W_j, 1≤ j≤ J, (ii) For each fixed j, we combine the filtered feature functions f̃_j,k=(W_jf_k) into new feature functions g_j,k where each g_j,k is a linear combination of the f̃_j,k, (iii) For each fixed k, we perform a cross-channel convolution that maps { g_j,k}_j=1^J to {g̃_j,k}_j=1^J' where each g̃_j,k is a linear combination of the g_j,k, (iv) apply some non-linear, nonexpansive pointwise activation function σ to each of the g̃_j,k, to obtain h_j,k=σ∘g̃_j,k, (v) reshape the collection of functions {h_i,j}_1≤ i ≤C̃,1≤ j≤ J' into {f'_i}_i=1^C', where C'=C̃J'. In many applications, it may be sufficient to use a common filter bank {W_j}_1≤ j≤ J for all input channels. However, in other settings, it may be useful to give the network additional flexibility to learn different filters along different input signals. Therefore, for the sake of generality, we actually define the filtering step by f̃_j,k=(W_j,kf_k), where for each fixed k, {W_j,k}_1≤ j ≤ J is a collection of linear operators (i.e., filters) to be applied to the input channel f_k. Explicitly, we define our layerwise update rule in the following manner. Let F^(0)=F, C_0=C and given F^(ℓ)=(f_1^(ℓ),…,f_C_ℓ^(ℓ)), we define F^(ℓ+1)=(f_1^(ℓ+1),…,f_C_ℓ+1^(ℓ+1)) via: Filtering: f̃^(ℓ)_j,k=W^(ℓ)_j,kf^(ℓ)_k, 1≤ j ≤ J_ℓ, 1≤ k≤ C_ℓ Combine: g_j,k^(ℓ)=∑_i=1^C_ℓf̃^(ℓ)_j,iθ^(ℓ,j)_i,k, 1≤ j≤ J_ℓ, 1≤ k ≤ C'_ℓ Cross-Channel Convolution: g̃_j,k= ∑_i=1^J_ℓα^(ℓ,k)_j,ig_i,k, 1≤ j≤ J_ℓ',1≤ k≤ C'_ℓ Activation: h_j,k^(ℓ)=σ^(ℓ)∘g̃_j,k^(ℓ), 1≤ j≤ J_ℓ, 1≤ k ≤ C'_ℓ Reshaping: f^(ℓ+1)_(j-1)C_ℓ+k = h^(ℓ)_j,k, 1≤ j≤ J_ℓ',1≤ k≤ C'_ℓ, where C_ℓ+1=J'_ℓ C_ℓ', and the reshaping operator allows for multiple layers to be stacked upon each other. Importantly, we note one may effectively omit the combine step by setting the matrix Θ^(ℓ,j)(θ_i,k^(ℓ,j))_1≤ i,k≤ C_ℓ equal to the identity matrix for each ℓ and j. Similarly, one may omit the cross-channel convolutions by setting the matrices (α_j,i^(ℓ,k))_1≤ i,j≤ J_ℓ to the identity. Additionally, we note that since we allow for the possibility of using different filters along each channel, it is, in general, possible to write the same network as an MFCN in more than one way. For instance, if one fixes the cross channel convolutions equal to the identity, uses a shared filter bank {W^(ℓ)_j}_1≤ j ≤ J (independent of k) and chooses the combine step to be independent of j (i.e. θ_i,k^(ℓ,j)=θ_i,k^(ℓ)) then we have f^(ℓ+1)_(j-1)C_ℓ+k = σ^(ℓ)(∑_i=1^C_ℓW^(ℓ)_jθ^(ℓ)_i,kf_i), which may also be obtained by using filters of the form W^(ℓ)_(j-1)C_ℓ+k,i=W_jθ^(ℓ)_i,k and using a combine step with θ̃_i,k^(ℓ,j)=1. Therefore, the set of networks that may be obtained by setting θ_i,k^(ℓ,j)=1 is just as large as the set of all MFCN. A similar conclusion holds for the cross-channel convolutions. Therefore, in the case where all filters are implemented in the spectral domain, the class of MFCNs is actually the same as the class of MNNs considered in previous work such as <cit.> (see Example <ref> below). However, as alluded to earlier, we find that thinking of the filtering, combination, and cross-channel convolutions steps separately is a useful framework for a couple of reasons. 
First, it facilitates our mathematical analysis of the convergence rate obtained in Corollary <ref> and in particular allows us to produce rates that depend only linearly on the depth of the network and do not directly depend on the network's width. Second, it highlights a variety of natural subclasses of networks that may be useful for various data sets or tasks of interest. For instance, each piece of the architecture can either be designed in advance or learned from data. Moreover, one may choose to use a common filter bank W_j, 1≤ j≤ J for all input functions and in all layers or one may choose to use different filters in each layer and/or for each signal. Below we will consider several examples of such classes, but first, we remark that our analysis does not depend on the order in which the steps are performed. Therefore, the theoretical guarantees obtained in Theorem <ref> and Corollary <ref> also apply, for example, to networks in which the cross-channel convolutions occur after the activation. Additionally, we note that one may make different choices in each layer. For example, one may use a hand-crafted filter bank in the first several layers and then a learnable filter bank in the later layers. Similarly, the activation functions may vary from one layer to the next. However, we will often depress the dependence of the activation function on the layer and simply write σ in place of σ^(ℓ). [Different Filters Along Each Channel] If we set the cross-channel convolution equal to the identity, set C_ℓ'=1 and set θ_i,k^(ℓ,j)=1 then we obtain the layerwise update rule f^(ℓ+1)_j=σ(∑_j=1^CW^(ℓ)_j,kf_k). If each of the W_j,k^(ℓ)=w^(ℓ)_j,k(ℒ) is a spectral filter (as defined in Section <ref>), we then obtain the layerwise update rule f^(ℓ+1)_j=σ(∑_j=1^Cw^(ℓ)_j,k(ℒ)f_k). which was introduced in <cit.> and has been subsequently studied in <cit.>. Notably, in this example the reshaping operator is the identity (since C'_ℓ=1)) and the filters W_j,k^(ℓ) depend on both the layer ℓ and the input channel k. As mentioned above (see the discussion surrounding (<ref>)), this class of networks is the most general and actually includes all MFCNs. However, considering, e.g., the filter and combine steps separately helps facilitate our analysis. For instance, our rate of convergence obtained in Theorem <ref> depends on max_j,k(|∑_i=1^C_ℓ |θ_i,k^(ℓ,j)|), but unlike the results obtained in previous work does not directly depend on the width of the network. In particular, if we set θ_i,k^(ℓ,j)=1/C_ℓ, then we have max_j,k(|∑_i=1^C_ℓ |θ_i,k^(ℓ,j)|)=1. [Shared Filter Banks Along Each Channel] In order to reduce the number of trainable parameters, it may be useful to utilize a (learned) filter bank which is shared across all input channels and a combination matrix which is shared across all filters. In this case, one obtains a layerwise update rule of the form (<ref>). Such networks may loosely be thought of as a low-rank subset of the more general networks discussed in Example <ref>. (In this setting, since the filter banks are learned, there is still no need for cross-channel convolutions.) Due to the irregularity of the data geometry, many popular GNNs such as the GCN of Kipf and Welling <cit.> use predesigned aggregations and incorporate learning through the combine steps. The next example discusses the analog of such networks on manifolds. [MCNs] Set the cross-channel convolutions equal to the identity and let J=J'=1. 
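For concreteness, one discrete filter-combine layer implementing the five steps above can be sketched as follows. The filter matrices W[j][k], the combine weights θ, and the cross-channel weights α are assumed to be given (learned or predesigned); the sketch is illustrative and does not correspond to a released implementation.

```python
import numpy as np

def mfcn_layer(X, W, theta, alpha, sigma=lambda z: np.maximum(z, 0.0)):
    """One discrete filter-combine layer.

    X:     (n, C) data matrix, one column per input channel.
    W:     nested list with W[j][k] an (n, n) filter matrix, 0 <= j < J, 0 <= k < C.
    theta: (J, C, Cp) combine weights theta[j, i, k].
    alpha: (Cp, Jp, J) cross-channel weights alpha[k, j, i].
    sigma: pointwise nonexpansive activation (ReLU here).
    Returns an (n, Jp * Cp) matrix whose columns are the output channels.
    """
    n, C = X.shape
    J, _, Cp = theta.shape
    Jp = alpha.shape[1]
    # (i) filtering: x_tilde[j, k] = W[j][k] @ X[:, k]
    x_tilde = np.empty((J, C, n))
    for j in range(J):
        for k in range(C):
            x_tilde[j, k] = W[j][k] @ X[:, k]
    # (ii) combine across channels for each fixed filter index j
    y = np.einsum('jin,jik->jkn', x_tilde, theta)
    # (iii) cross-channel convolution across filter indices for each fixed channel k
    y_tilde = np.einsum('kji,ikn->jkn', alpha, y)
    # (iv) pointwise activation and (v) reshaping of the (j, k) pairs into a single channel axis
    return sigma(y_tilde).reshape(Jp * Cp, n).T
```

Choosing W[j][k] = w^{(ℓ)}_{j,k}(𝐋_n) for spectral filters recovers a discrete version of the update rule in the preceding example, and setting θ or α to the identity removes the combine or cross-channel step, respectively.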
Let A be a fixed operator which should be thought of as either a low-pass filter or a localized averaging operator, and set W^(ℓ)_i,1=A for all i. Let the matrix Θ^(ℓ) = (θ^(ℓ,1)_i,k)_1≤ i≤ C_ℓ,1≤ k ≤ C'_ℓ be a learnable weight matrix. Then our layerwise update rule becomes f_k^ℓ+1=∑_i=1^C_ℓAf_kθ_i,k^(ℓ,1) which may be written compactly as F^(ℓ+1)=σ(AF^(ℓ)Θ^(ℓ)). Therefore, we obtain a network similar to the GCN of Kipf and Welling which we refer to as the manifold convolutional network (MCN). Notably, A can be designed in a variety of ways, but one possible choice is to define it in the spectral domain where w is a non-increasing function such as an idealized low-pass filter w(λ)=1_λ≤ a or setting w(λ)=e^-tλ which corresponds to convolution against the heat kernel. Additionally, one could consider the filter bank consisting of powers of A, i.e. W^(ℓ)_j=A^j, 1≤ j ≤ J, use a different combine matrix in each channel, and employ a simple cross-channel convolution by setting α_j,i^(ℓ,k)=1. In this case, one obtains a layerwise update rule of the form F^(ℓ+1)=σ(∑_j=1^JA^JF^(ℓ)Θ^(ℓ,j)), which can be thought of the manifold analog of the higher-order GCNs considered in work such as <cit.>. Similar to the above example, one could also consider the manifold analogs of other popular spectral GNNs such as ChebNet<cit.> or CayleyNet<cit.>. Our framework also includes the manifold scattering transforms. [Hand-Crafted Scattering Networks] Let {W_j}_j=1^J be a predesigned collection of filters, which are thought of as wavelets and do not depend on the layer or the input channel. Set the combine and cross-channel convolutions equal to the identity. One then obtains an entirely predesigned, multilayered network known as the manifold scattering transform. Such networks were considered in <cit.> in order to analyze the stability of and invariance properties of deep learning architectures defined on manifolds, building off of analogous work for Euclidean data <cit.> and graphs <cit.>. [Learnable Scattering Networks] For both Euclidean data and graphs, there have been a variety of papers that have introduced learning into the scattering framework. In the Euclidean setting, <cit.> created a network that acts as a hybrid of the scattering transform and a CNN using predesigned, wavelet filter in some layers and learnable filters in others. Subsequent work by <cit.> introduced learning in a different way, incorporating cross-channel convolutions into an otherwise predesigned network. One may construct an analogous MFCN that corresponds to utilizing a predesigned filter bank {W_j}_j=1^J which is shared across all channels, setting the combine step equal to the identity, and letting α_j,i^(ℓ,k) be learnable. (Traditionally, scattering networks have used |·| as the activation function, but one could readily use other choices instead.) In the graph setting, <cit.> incorporated learning into the scattering framework by utilizing using predesigned wavelet filters, but learnable combine matrices (along with a few other features to boost performance). In a different approach, <cit.> sought to relax the graph scattering transform by replacing dyadic scales 2^j with an increasing sequence of scales t_j which are learned from data via a selector matrix. To obtain an analogous MFCN, we set W_j=e^-jℒ for 0≤ j ≤ J, which diffuses the input signal over the manifold at different time-scales, corresponding to the diffusion module utilized in <cit.>. 
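On a graph discretization with Laplacian 𝐋_n, this diffusion module corresponds to the filter bank e^{-j𝐋_n}, 0 ≤ j ≤ J. A minimal sketch is given below; the use of a dense matrix exponential is a choice made here for clarity and is not a detail of the cited implementations.

```python
import numpy as np
from scipy.linalg import expm

def diffusion_filter_bank(L_n, J):
    """Return the matrices e^{-j L_n} for j = 0, ..., J (j = 0 is the identity)."""
    n = L_n.shape[0]
    heat = expm(-L_n)                      # one unit of diffusion on the graph
    bank, current = [np.eye(n)], np.eye(n)
    for _ in range(J):
        current = current @ heat           # e^{-j L_n} = (e^{-L_n})^j
        bank.append(current)
    return bank
```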
We then set the combination step equal to the identity and learn relationships between the diffusion scales via cross-channel convolutions (where the cross-channel convolutions utilized in <cit.> have a certain structure that encourages the network to behave in a wavelet-like manner). Additionally, as has previously been noted in <cit.>, these two forms of learnable geometric scattering are compatible and one could readily utilize learnable combine steps while also using cross-channel convolutions to learn relationships between diffusion scales. Lastly, we also note that our framework includes simple multilayer perceptrons. [Multilayer Perceptron] If one sets J_ℓ=1 and sets both W_1,k^(ℓ) and the cross-channel convolution to be the identity operator then one obtains a simple dense layer that does not utilize the geometry of the manifold. In some sense, this is contrary to our goal of developing networks that utilize the manifold structure of the data. However, including some simple dense layers might nevertheless be useful for, for example, reducing the number of channels in the network. §.§ Implementation from point clouds As alluded to earlier, in many applications one does not have global knowledge of the manifold ℳ and merely has access to n data points {x_j}_j=1^n and evaluations of F at those data points. This leads us to recall the normalized evaluation operator (P_nf)(j)=1/√(n)f(x_j) and approximate F by an n× C data matrix 𝐗=(𝐱_1,…,𝐱_C), where 𝐱_k=P_nf_k. One may then implement an approximation of the network via the discrete update rules. Filtering: 𝐱̃^(ℓ)_j,k=𝐖^(ℓ)_j,k𝐱^(ℓ)_k, 1≤ j ≤ J_ℓ, 1≤ k≤ C_ℓ Combine: 𝐲_j,k^(ℓ)=∑_i=1^C_ℓ𝐱̃^(ℓ)_j,iθ^(ℓ,j)_i,k, 1≤ j≤ J_ℓ, 1≤ k ≤ C'_ℓ Cross-Channel Convolution: 𝐲̃^(ℓ)_j,k= ∑_i=1^J_ℓα^(ℓ,k)_j,i𝐲_i,k, 1≤ j≤ J_ℓ',1≤ k≤ C'_ℓ Activation: 𝐳_j,k^(ℓ)=σ∘𝐲̃_j,k^(ℓ), 1≤ j≤ J_ℓ, 1≤ k ≤ C'_ℓ Reshaping: 𝐱^(ℓ+1)_(j-1)C_ℓ+k = 𝐳^(ℓ)_j,k, 1≤ j≤ J_ℓ',1≤ k≤ C'_ℓ where 𝐖_j,k^(ℓ) is a matrix which acts as a discrete approximation of W_j,k^(ℓ). The following theorem shows that the discrete implementation will converge to its continuum counterpart in the sense that P_n F^(ℓ)≈𝐗^(ℓ) if the matrices 𝐖_j,k^(ℓ) are designed so that 𝐖_j,k^(ℓ)P_n f_k^(ℓ)≈ P_n W_j,kf_k^(ℓ). For a proof, please see Appendix <ref>. Let f ∈𝒞(ℳ), and suppose that for all ℓ, there exists ϵ_ℓ>0 such that we have P_nW_j,k^(ℓ)f_k^(ℓ)-𝐖^(ℓ)_j,k𝐱^(ℓ)_k_2 ≤𝐱^(ℓ)_k-P_nf_k^ℓ_2+ ϵ_ℓ,n for all 1≤ k ≤ C_ℓ. Let A_1^(ℓ)=max_j,k(|∑_i=1^C_ℓ |θ_i,k^(ℓ,j)|), A_2^(ℓ)=max_j,k(∑_i=1^J_ℓ |α_j,i^(ℓ,k)|) and assume that σ is non-expansive, i.e. |σ(x)-σ(y)|≤ |x-y|. Then, 𝐱_k^ℓ-P_nf_k^ℓ_2≤∑_i=0^ℓ-1∏_j=i^ℓ-1 A_1^(j) A_2^(j)ϵ_i,n. Notably, Theorem <ref> does not assume the filters are constructed in the spectral domain nor does it assume they have any particular form. It is a general result that shows that if individual filters converge, then so does the multilayer network. Moreover, if the weights α_j,i^(ℓ,k) and θ_j,i^(ℓ,j) are normalized so that the A_1^(j)=A_2^(j)=1, then the rate of the convergence is linear in the depth of the network. This is in contrast to previous results in <cit.> whose rate of convergence featured an explicit exponential dependence on the depth of the network. (A similar exponential dependence was also encountered in <cit.> where the limiting object is a graphon rather than a manifold.) 
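When the filters are implemented in the spectral domain, the matrices 𝐖^{(ℓ)}_{j,k} = w^{(ℓ)}_{j,k}(𝐋_n) assumed in the discrete update rules can be computed from the first κ eigenpairs of the graph Laplacian, matching the bandlimited truncation used in the analysis. A minimal sketch (illustrative rather than a released implementation) is:

```python
import numpy as np

def spectral_filter(L_n, w, kappa):
    """Return the matrix w(L_n) truncated to the first kappa eigenpairs,
    i.e. w(L_n) x = sum_{i <= kappa} w(lambda_i^n) <x, phi_i^n> phi_i^n."""
    lam, phi = np.linalg.eigh(L_n)          # ascending eigenvalues, orthonormal eigenvectors
    lam, phi = lam[:kappa], phi[:, :kappa]
    return (phi * w(lam)) @ phi.T

# Example: a low-pass filter w(lambda) = exp(-lambda) built from the first 20 eigenpairs.
# W = spectral_filter(L_n, lambda lam: np.exp(-lam), kappa=20)
```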
Combining Theorem <ref> with Theorem <ref> immediately leads to the following corollary which gives a quantitative rate of convergence for Manifold Filter-Combine Networks constructed utilizing spectral filters when either the filter or the input signals are bandlimited. Notably, if one proves theorems analogous to Theorem <ref> for other classes of filters (constructed either by spectral or not spectral methods) such as the α-FDT filters considered in <cit.> or the closely related γ-FDT filters considered in <cit.>, then one may immediately obtain similar corollaries.[Such results were obtained for α-FDT filters with specific graph constructions in <cit.>.] Assume that each W_j,k^(ℓ) is a spectral filter of the form W_j,k^(ℓ)=w_j,k^(ℓ)(ℒ) with w_j,k^(ℓ)_𝐋^∞([0,∞))≤ 1, and the matrices 𝐖_j,k are given by 𝐖_j,k^(ℓ)=w_j,k^(ℓ)(𝐋_n). As in Theorem <ref>, let A_1^(ℓ)=max_j,k(|∑_i=1^C_ℓ |θ_i,k^(ℓ,j)|), A_2^(ℓ)=max_j,k(∑_i=1^C_ℓ |α_j,i^(ℓ,k)|) and assume that σ is non-expansive, i.e. |σ(x)-σ(y)|≤ |x-y|. Let A^(ℓ)_maxLip=max_j,k,A_Lip(w^(ℓ)_j,k). Assume that there exist sequences of real numbers {α_n}_n=1^∞, {β_n}_n=1^∞, {γ_n}_n=1^∞, with lim_n→∞α_n=lim_n→∞β_n=lim_n→∞γ_n=0, such that |λ_i-λ^n_i|≤α_n, P_nϕ_i-ϕ_i^n_2≤β_n, |⟨ f, g ⟩_2 - ⟨ f,g⟩_𝐋^2(ℳ)| ≤γ_n^2fg_𝐋^∞(ℳ), Assume n is large enough such that (<ref>) holds and α_n,β_n,γ_nκ^1/2≤ 1. Then, the error in each channel of the ℓ-th layer satisfies 𝐱_k^ℓ-P_nf_k^ℓ_2≤∑_i=0^ℓ-1∏_j=i^ℓ-1 A_1^(j) A_2^(j) C_ℳκmax_k'((A^(i)_maxLipα_n+β_n)f^(i)_k'_𝐋^2(ℳ)+γ_nf^(i)_k'_𝐋^∞(ℳ)). In particular, if we assume that we have A_1^(j), A_2^(j), A^(i)_maxLip≤ 1, for all i and j we have 𝐱_k^ℓ-P_nf_k^ℓ_2≤ C_ℳκℓ((α_n+β_n)max_k',if^(i)_k'_𝐋^2(ℳ)+γ_nmax_k',if^(i)_k'_𝐋^∞(ℳ)). In <Ref>, we provided several examples of α_n, β_n, and γ_n for three graph constructions. Using <Ref>, we immediately obtain the following three corollaries giving rates of convergence for each of these constructions. Assume the same conditions on W_j,k^(ℓ), 𝐖_j,k, A_1^(ℓ), A_2^(ℓ), A^(ℓ)_maxLip, and σ as in <Ref>, and assume A_1^(j), A_2^(j), A^(i)_maxLip≤ 1. Assume an MFCN is implemented with a data-driven graph 𝐆_n constructed as in <Ref> with a Gaussian kernel. Then with probability 1 - 𝒪(1/n^9), for large enough n, the error in each channel of the ℓ-th layer of the MFCN satisfies 𝐱_k^ℓ-P_nf_k^ℓ_2≤ C_ℳκℓ(√(log(n))/n^2/(d+6)max_k',if^(i)_k'_𝐋^2(ℳ)+ (18log(n)/n)^1/4max_k',if^(i)_k'_𝐋^∞(ℳ)). Assume the same conditions on W_j,k^(ℓ), 𝐖_j,k, A_1^(ℓ), A_2^(ℓ), A^(ℓ)_maxLip, and σ as in <Ref>, and assume A_1^(j), A_2^(j), A^(i)_maxLip≤ 1. Assume an MFCN is implemented with a data-driven ϵ-graph 𝐆_n constructed as in <Ref>. Then with probability 1 - 𝒪(1/n^9), for large enough n, the error in each channel of the ℓ-th layer of the MFCN satisfies 𝐱_k^ℓ-P_nf_k^ℓ_2≤ C_ℳκℓ( ( log(n)/n )^1/d+4max_k',if^(i)_k'_𝐋^2(ℳ)+ (18log(n)/n)^1/4max_k',if^(i)_k'_𝐋^∞(ℳ)). Assume the same conditions on W_j,k^(ℓ), 𝐖_j,k, A_1^(ℓ), A_2^(ℓ), A^(ℓ)_maxLip, and σ as in <Ref>, and assume A_1^(j), A_2^(j), A^(i)_maxLip≤ 1. Assume an MFCN is implemented with a data-driven k-NN graph 𝐆_n constructed as in <Ref>. Then with probability 1 - 𝒪(1/n^9), for large enough n, the error in each channel of the ℓ-th layer of the MFCN satisfies 𝐱_k^ℓ-P_nf_k^ℓ_2≤ C_ℳκℓ( ( log(n)/n )^1/d+4max_k',if^(i)_k'_𝐋^2(ℳ)+ (18log(n)/n)^1/4max_k',if^(i)_k'_𝐋^∞(ℳ)). § NUMERICAL EXPERIMENTS In this section, we compare the performance of three different examples of manifold filter-combine networks on the ModelNet dataset<cit.>. 
In particular, we focus on the MNN with different learnable filters in each channel (DLF), the MCN, and the manifold scattering transform (Scattering) discussed in Examples <ref>, <ref>, and <ref>. The code for reproducing our experiments is available at <https://github.com/KrishnaswamyLab/mfcn>. §.§ Data We used the ModelNet10 dataset which consists of three-dimensional point clouds sampled from various objects belonging to the classes bathtub, bed, chair, desk, dresser, monitor, nightstand, sofa, table, and toilet. Examples of point clouds in the dataset are given in Figure <ref>. For each point cloud, we preprocess the data by scaling the point coordinates (z-scaling), then randomly sample 100 points from the whole point cloud. We then create a graph via the constructions discussed in Examples <ref>, <ref>, and, <ref>, i.e., Gaussian kernels (dense), ϵ-graphs, and unweighted k-NN graphs. We use the x, y, and z coordinates of the nodes as input signals. The ModelNet10 dataset comes with a predefined training set (3901 samples) and test set (799 samples). In our experiments, we randomly select 20% of the training set to use for validation. We then consider two regimes. In the full data regime, we use the entire remaining 80% of for training. In the subset data regime, we randomly select 1000 samples from that 80% to use for training. We repeat this procedure five times and report our accuracies in the format mean ± std. §.§ Models In our experiments, we consider three manifold neural network architectures as described below. For each model, we used two layers of manifold networks, followed by a multi-layer perceptron classifier consisting of a single hidden layer. For further details of our hyperparameter settings and training procedures please see Table <ref> in Appendix <ref>. Scattering We follow the experimental procedure utilized in <cit.> and compute zeroth-, first-, and second-order scattering moments. More specifically, for 0≤ j≤ J and 1≤ q≤ Q, we define first-order, q-th scattering moments by Sf[j,q]∫_ℳ|W_jf(x)|^qdx=W_jf_𝐋^q(ℳ)^q, where W_j are spectral wavelet filters corresponding to the functions w_j(λ)=e^2^j-1λ-e^2^jλ for 1≤ j≤ J and w_0(λ)=1-e^-λ. We define second-order moments, for 0≤ j<j'≤ J, by Sf[j,j',q]∫_ℳ|W_j'|W_jf(x)||^qdx=W_j'|W_jf|_𝐋^q(ℳ)^q. Zeroth-order moments are defined simply by Sf[q]∫_ℳ|f(x)|^qdx=f_𝐋^q(ℳ)^q. In our experiments, we set J=8, Q=4 and use the first 20 eigenvalues and eigenvectors of the graph Laplacian to implement the spectral wavelet filters. DLF We used two layers of DLF, where each layer consists of J_ℓ spectral filters (J_1=16, J_2=32). After applying the J_ℓ filters per input dimensions, we combined the channels by summation (i.e., θ^(ℓ,j)_i,k = 1). Similarly, as for scattering, we used the first 20 eigenvalues and eigenvectors of the Laplacian matrix to compute our filters. We used a ReLU activation and the identity map for the cross-channel convolution. We used average pooling at the last layer to obtain the feature vector to be processed by the classifier. We considered two parameterizations of the filters w(λ), one denoted DLF-MLP, where we parametrize each w(λ) as a 2-layer MLP, and the other denoted DLF-POLY, in which we parameterize each w(λ) as a degree-four polynomial of e^-λ (which is the parameterization utilized in, e.g., <cit.>). MCN We used two layers of graph convolutional networks with J_l (J_1=16, J_2=32) hidden dimension applied to the input graph with ReLU activations. 
As in <cit.>, our low-pass filter was implemented by 𝐀̂=(𝐃+𝐈)^-1/2(𝐀+𝐈)(𝐃+𝐈)^-1/2 which is equivalent to applying the spectral filter w(λ)=1-λ/2 to the normalized graph Laplacian and then utilizing a renormalization trick in order to facilitate the learning process. We used a ReLU activation and the identity map for the cross-channel convolution. We used average pooling at the last layer to obtain the feature vector to be processed by the classifier. §.§ Results We compared the performance of the different models and graph construction based on the classification accuracy on the left-out test set. In Table <ref>, we report the mean and standard deviation of the test accuracy across the five different splits (5-folds) for both the full and subset data regimes. All of the models consistently perform much better than random chance (which is roughly 10% accuracy since there are ten classes) but are all far from 100% accuracy. In particular, in the full data regime, accuracy levels range from 54% to 75% and from 44% to 70% in the subset data regime. Overall the two versions of DLF are the best performing methods, particularly on the Dense graphs and the Epsilon Graphs. We note that DLF-MLP outperforms DLF-POLY in four out of six cases, but has the drawback of requiring more parameters. On the k-NN graphs, MCN performs nearly as well as DLF, but is the least accurate method on the dense graph construction. Scattering is overall the lowest performing method. However, its performance is the least affected by the number of samples. For instance, on the dense graph construction, it loses four percentage points of accuracy compared to MCN and DLF which lose ten and nine points. This suggests that the wavelet filters are useful geometric descriptors, but that overly hand-crafted networks lack the flexibility to learn from data. § CONCLUSION We have introduced a new framework for analyzing and implementing manifold neural networks that we call manifold filter-combine networks. This framework naturally allows us to think about many interesting classes of MNNs such as the manifold analogs of GCNs and several relaxed variations of the manifold scattering transform. Additionally, we have provided methods for implementing such networks when one does not have global knowledge of the manifold, but merely has access to n sample points, that converge provably to their continuum limit as n→∞. In order to establish this result, we also prove a theorem establishing sufficient convergence conditions for the individual filters used in the network. This result is not specific to any particular graph construction. Instead, it shows that if the eigenvectors and eigenvalues of the graph Laplacian converge (and additionally that discrete inner products converge to continuum inner products) then spectral filters constructed from the graph Laplacian will converge as well. This allows our results to be applied to a wide variety of graph constructions including those discussed in Examples <ref>, <ref>, and <ref>. The flexibility of our setup is deliberate. The development of manifold neural networks is in its infancy, even compared to graph neural networks, and there are many questions about which networks will perform best in practice. Should networks use learnable filter banks similar to a CNN or predesigned averaging operations similar to a common aggregate-combine network? Are cross-channel convolutions a viable way to introduce learning in settings where there are no nontrivial relations between input channels? 
In this work, we do not claim to provide an answer to the question “what are the best ways to design a manifold neural network?" which ultimately will need to be answered through thorough experimentation. The purpose of this paper is instead to facilitate this experimentation by providing a useful framework for thinking about MNNs. We also note several other important areas of future work. (i) In examples <ref>, <ref>, and <ref>, we consider settings where the data points {x_i} lie exactly on the manifold and are sample i.i.d. uniformly at random. Relaxing these assumptions would greatly increase the applicability of our theory to noisy real-world data. (ii) Most of the data sets used in the MNN literature focus on two-dimensional surfaces. Developing challenging and relevant benchmarks for learning on higher-dimensional manifolds would help facilitate the experimental exploration of various MNN architectures. § ACKNOWLEDGEMENT The authors thank Luana Ruiz for helpful discussion that greatly improved the quality of our exposition. plain § THE PROOF OF THEOREM <REF> We first note that if either w or f is κ bandlimited, we have w(𝐋_n)P_nf-P_nw(ℒ)f_2 = ∑_i=1^κ w(λ_i^n)⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n - ∑_i=1^κ w(λ_i)⟨ f,ϕ_i⟩_ℳP_nϕ_i_2 ≤ ∑_i=1^κ (w(λ_i^n)-w(λ_i))⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n_2+∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n- ⟨ f,ϕ_i⟩_ℳP_nϕ_i)_2. To bound the first term from (<ref>), we note that by the triangle inequality, the Cauchy-Schwarz inequality, and the assumption that n is large enough so that α_n≤ 1, we have ∑_i=1^κ (w(λ_i^n) - w(λ_i))⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n_2 ≤ max_1≤ i ≤κ |w(λ_i^n)- w(λ_i)| ∑_i=1^κP_n f_2 ϕ_i^n^2_2 ≤ A_Lip(w)α_n ∑_i=1^κP_n f_2 ϕ_i^n^2_2 ≤ A_Lip(w)κα_n P_n f_2 ≤ A_Lip(w)κ(α_n f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)), where we use the fact that ϕ_i^n_2^2=1 and that P_nf_2≤(f_𝐋^2(ℳ)^2 + γ_n^2f_𝐋^∞(ℳ)^2)^1/2≤f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ). Now, turning our attention to the second term from (<ref>), we have ∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n- ⟨ f,ϕ_i⟩_𝐋^2(ℳ)P_nϕ_i)_2 ≤ ∑_i=1^κ w(λ_i)⟨ P_nf,ϕ_i^n⟩_2(ϕ_i^n-P_nϕ_i)_2 +∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2- ⟨ f,ϕ_i⟩_𝐋^2(ℳ)P_nϕ_i_2. By the assumption (<ref>), we have ϕ_i^n-P_nϕ_i_2≤β_n. Therefore, since w non-amplifying, we see ∑_i=1^κ w(λ_i)⟨ P_nf,ϕ_i^n⟩_2(ϕ_i^n-P_nϕ_i)_2 ≤κmax_1≤ i≤κ |⟨ P_nf,ϕ_i^n⟩_2|ϕ_i^n-P_nϕ_i_2 ≤κβ_nP_nf_2 ≤κβ_n (f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ) ), where the final inequality follows from (<ref>). Meanwhile, the second term from (<ref>) can be bounded by ∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2- ⟨ f,ϕ_i⟩_ℳ)P_nϕ_i_2 ≤ ∑_i=1^κ |w(λ_i)| |⟨ P_nf,ϕ_i^n⟩_2- ⟨ f,ϕ_i⟩_ℳ|P_nϕ_i_2 ≤ ∑_i=1^κ |⟨ P_nf,ϕ_i^n⟩_2- ⟨ f,ϕ_i⟩_ℳ|P_nϕ_i_2 ≤ ∑_i=1^κ |⟨ P_nf,ϕ_i^n⟩_2-⟨ P_nf,P_nϕ_i⟩_2|P_nϕ_i_2+∑_i = 1^κ |⟨ P_nf,P_nϕ_i⟩_2- ⟨ f,ϕ_i⟩_ℳ|P_nϕ_i_2. By the Cauchy-Schwarz inequality, (<ref>), (<ref>), and the assumption that n is large enough so that β_n≤ 1, we have |⟨ P_nf,ϕ_i^n⟩_2-⟨ P_nf,P_nϕ_i⟩_2| ≤ P_nf_2 ϕ_i^n-P_nϕ_i_2≤β_n(f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ))≤(β_nf_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)). And also by (<ref>) we have |⟨ P_nf,P_nϕ_i⟩_2- ⟨ f,ϕ_i⟩_2| ≤γ_n^2f_𝐋^∞(ℳ)ϕ_i_𝐋^∞(ℳ), and P_nϕ_i_2≤ 1+γ_nϕ_i_𝐋^∞(ℳ). It is known (see, e.g., Appendix L of <cit.> and the references there) that ϕ_i_𝐋^∞(ℳ)≤ C_ℳ i^(d-1)/2d≤ C_ℳi^1/2. Therefore, for all i≤κ the assumption that n is large enough that γ_nκ^1/2≤ 1 implies |⟨ P_nf,P_nϕ_i⟩_2- ⟨ f,ϕ_i⟩_2| ≤ C_ℳγ^2_nκ^1/2f_𝐋^∞(ℳ)≤ C_ℳγ_n, and P_nϕ_i_2≤ 1+γ_n κ^1/2≤ 2. 
Therefore, if n is large enough such that γ_nκ^1/2<1, then the second term from (<ref>) can be bounded by ∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2 -⟨ f,ϕ_i⟩_ℳ)P_nϕ_i_2 ≤ ∑_i=1^κ |⟨ P_nf,ϕ_i^n⟩_2-⟨ P_nf,P_nϕ_i⟩_2|P_nϕ_i_2 +∑_i=1^κ|⟨ P_nf,P_nϕ_i⟩_2- ⟨ f,ϕ_i⟩_2|P_nϕ_i_2 ≤ ∑_i=1^κ(β_nf_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ))P_nϕ_i_2 +∑_i=1^κ C_ℳγ_nf_𝐋^∞(ℳ)P_nϕ_i_2 ≤ C_ℳ(κ(β_nf_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)) + γ_nκf_𝐋^∞(ℳ)) ≤ C_ℳκ( β_nf_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)). Therefore, combining Equations (<ref>) through (<ref>) yields w(𝐋_n)P_nf-P_nw(ℒ)f_2 ≤ ∑_i=1^κ (w(λ_i^n) - w(λ_i))⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n_2+∑_i=1^κ w(λ_i)(⟨ P_nf,ϕ_i^n⟩_2ϕ_i^n-⟨ f,ϕ_i⟩_ℳP_nϕ_i)_2 ≤ A_Lip(w)κ (α_nf_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ))+ C_ℳ(κβ_n(f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)) + γ_nκf_𝐋^∞(ℳ)) ≤ C_ℳκ((A_Lip(w)α_n+β_n)f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)) thus completing the proof of (<ref>). To prove (<ref>), we observe that since w_𝐋^∞([0,∞)), we have w(𝐋_n)𝐱-w(𝐋_n)P_nf_2 ≤𝐱-P_nf_2 by the same reasoning as (<ref>). Therefore, by the triangle inequality, we have w(𝐋_n)𝐱-P_nw(ℒ)f_2 ≤ w(𝐋_n)𝐱-w(𝐋_n)P_nf_2 + w(𝐋_n)P_nf-P_nw(ℒ)f_2 ≤ 𝐱-P_nf_2 + C_ℳκ((A_Lip(w)α_n+β_n)f_𝐋^2(ℳ)+γ_nf_𝐋^∞(ℳ)) as desired. § THE PROOF OF THEOREM <REF> In order to prove Theorem <ref>, we need the following lemma which bounds the error in each step. The errors induced by the non-filtering steps of our network may be bounded by 𝐲_j,k^(ℓ)-P_ng_j,k^(ℓ)_2 ≤max_1≤ i≤ C_ℓ𝐱̃^(ℓ)_j,k-P_nf̃^(ℓ)_j,k_2∑_i=1^C_ℓ |θ_i,k^(ℓ,j)|, 𝐲̃_j,k^(ℓ)-P_ng̃_j,k^(ℓ)_2 ≤max_1≤ i≤ J_ℓ𝐲^(ℓ)_j,k-P_n g^(ℓ)_j,k_2∑_i=1^J_ℓ |α_j,i^(ℓ,k)|. 𝐳^(ℓ)_j,k-P_nh^(ℓ)_j,k_2 ≤𝐲̃^(ℓ)_j,k-P_ng̃^(ℓ)_j,k_2 To verify (<ref>), we observe that 𝐲_j,k^(ℓ)-P_ng_j,k^(ℓ)_2 =∑_i=1^C_ℓ𝐱̃^(ℓ)_j,kθ_i,k^(ℓ,j)-P_nf̃^(ℓ)_j,kθ_i,k^(ℓ,j)_2 ≤∑_i=1^C_ℓ|θ_i,k^(ℓ,j)|𝐱̃^(ℓ)_j,k-P_nf̃^(ℓ)_j,k_2 ≤max_1≤ i≤ C_ℓ𝐱̃^(ℓ)_j,k-P_nf̃^(ℓ)_j,k_2∑_i=1^C_ℓ |θ_i,k^(ℓ,j)|. The proof of (<ref>) is identical to the proof of (<ref>). For (<ref>), we see that since σ is non-expansive we have 𝐳^(ℓ)_j,k-P_nh^(ℓ)_j,k^2_2 =∑_i=1^n| (𝐳^(ℓ)_j,k)(i)-(P_nh^(ℓ)_j,k)(i)|^2 =∑_i=1^n| (𝐳^(ℓ)_j,k)(i)-h^(ℓ)_j,k(x_i)|^2 =∑_i=1^n| σ((𝐲̃^(ℓ)_j,k)(i))-σ(g̃^(ℓ)_j,k(x_i))|^2 ≤∑_i=1^n| (𝐲̃^(ℓ)_j,k)(i)-g̃^(ℓ)_j,k(x_i)|^2 =𝐲̃^(ℓ)_j,k-P_ng̃^(ℓ)_j,k^2_2. It follows from the definition of the reshaping operator that max_k𝐱_k^(ℓ+1)-P_nf_k^(ℓ+1)_2 = max_j,k𝐳^(ℓ)_p,r-P_nh^(ℓ)_p,r_2. Therefore, by Lemma <ref> we have max_k𝐱_k^(ℓ+1)-P_nf_k^(ℓ+1)_2 = max_j,k𝐳^(ℓ)_p,r-P_nh^(ℓ)_p,r_2. ≤ P_ng̃^(ℓ)_j,k-𝐲̃^(ℓ)_j,k_2. ≤ A^(ℓ)_2max_j,kP_n g^(ℓ)_j,k-𝐲^(ℓ)_j,k_2 ≤ A^(ℓ)_2A^(ℓ)_1max_j,kP_n f̃^(ℓ)_j,k-𝐱̃^(ℓ)_j,k_2 ≤ A^(ℓ)_2A^(ℓ)_1(max_k𝐱_k^(ℓ)-P_nf_k^(ℓ)_2 +ϵ_ℓ,n) Since 𝐱_0^(ℓ)-P_nf^(0)_k_2=0 for all k, we may use induction to conclude that 𝐱^(ℓ)_k-P_nf^(ℓ)_k_2≤∑_i=0^ℓ-1∏_j=i^ℓ-1 A_1^(j) A_2^(j)ϵ_i,n. § TRAINING AND IMPLEMENTATION DETAILS We trained all three models by minimizing the cross-entropy loss between predicted probabilities for each of the 10 categories and the ground truth category of each point cloud. We used the Adam optimizer for 200 epochs with a batch size of 32. The learning rate was selected according to validation performance and was chosen among 0.01 and 0.001. For each model, we used two layers of manifold networks, followed by a multi-layer perceptron classifier consisting of a single hidden layer. The hyper-parameters specific to each model and graph construction scheme are given in Table <ref>.
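For reference, the optimization recipe above can be sketched as follows for the classifier head, assuming the pooled manifold-network features of each point cloud have already been computed. The tensor names, feature dimension, and hidden width are illustrative assumptions and do not reproduce the released implementation.

```python
import torch
import torch.nn as nn

def train_classifier(feats, labels, hidden=128, num_classes=10, lr=1e-3, epochs=200, batch_size=32):
    """feats: (N, F) float tensor of pooled manifold-network features; labels: (N,) long tensor."""
    model = nn.Sequential(nn.Linear(feats.shape[1], hidden), nn.ReLU(), nn.Linear(hidden, num_classes))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(feats, labels), batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
    return model
```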
http://arxiv.org/abs/2307.07489v1
20230714172141
PseudoCal: A Source-Free Approach to Unsupervised Uncertainty Calibration in Domain Adaptation
[ "Dapeng Hu", "Jian Liang", "Xinchao Wang", "Chuan-Sheng Foo" ]
cs.LG
[ "cs.LG", "cs.CV" ]
Probing multipartite entanglement through persistent homology [ August 12, 2023 ============================================================= Unsupervised domain adaptation (UDA) has witnessed remarkable advancements in improving the accuracy of models for unlabeled target domains. However, the calibration of predictive uncertainty in the target domain, a crucial aspect of the safe deployment of UDA models, has received limited attention. The conventional in-domain calibration method, temperature scaling (TempScal), encounters challenges due to domain distribution shifts and the absence of labeled target domain data. Recent approaches have employed importance-weighting techniques to estimate the target-optimal temperature based on re-weighted labeled source data. Nonetheless, these methods require source data and suffer from unreliable density estimates under severe domain shifts, rendering them unsuitable for source-free UDA settings. To overcome these limitations, we propose PseudoCal, a source-free calibration method that exclusively relies on unlabeled target data. Unlike previous approaches that treat UDA calibration as a covariate shift problem, we consider it as an unsupervised calibration problem specific to the target domain. Motivated by the factorization of the negative log-likelihood (NLL) objective in TempScal, we generate a labeled pseudo-target set that captures the structure of the real target. By doing so, we transform the unsupervised calibration problem into a supervised one, enabling us to effectively address it using widely-used in-domain methods like TempScal. Finally, we thoroughly evaluate the calibration performance of PseudoCal by conducting extensive experiments on 10 UDA methods, considering both traditional UDA settings and recent source-free UDA scenarios. The experimental results consistently demonstrate the superior performance of PseudoCal, exhibiting significantly reduced calibration error compared to existing calibration methods. § INTRODUCTION In recent years, unsupervised domain adaptation (UDA)<cit.> has become a popular technique for effectively improving the generalization of deep learning models<cit.> from labeled source datasets to unlabeled out-of-domain target datasets. Remarkable strides have been made in the development of novel UDA methods <cit.>, practical UDA applications <cit.>, and real-world UDA settings <cit.>. Despite such advancements in UDA, there is a predominant focus on improving the performance of deep learning models in the target domain, while the calibration of target predictive uncertainty remains largely unexplored. This aspect is crucial for the deployment of UDA models in safety-critical decision-making scenarios, as deep learning models are known to suffer from the miscalibration problem, where confidence does not accurately reflect the likelihood of correctness <cit.>. Recent seminal works <cit.> have notably addressed the challenge of uncertainty calibration in UDA by focusing on the assumption of covariate shift <cit.>. They commonly employ importance weighting <cit.> to re-weight the labeled source validation data for target-adaptive temperature scaling (TempScal) <cit.>. However, these approaches have certain limitations that need to be addressed. Firstly, importance weighting is not reliable for large covariate shift and label shift scenarios <cit.>. Secondly, these methods require access to source data, making them unsuitable for privacy-preserving source-free UDA settings <cit.>. 
Lastly, the additional model training and density estimation involved in these methods make them more complex compared to the simple and post-hoc method of TempScal. To address these limitations, this paper aims to tackle the challenge of predictive uncertainty calibration in the unlabeled target domain without relying on source data. Unlike existing approaches that treat uncertainty calibration in UDA as a covariate shift problem, we adopt a distinct perspective by considering it as an unsupervised calibration problem in the target domain. Inspired by the pioneering work of Guo et al. <cit.>, we compare the target error and scaled target negative log-likelihood (NLL) in Figure <ref> (a). The figure clearly shows that the target NLL encounters significant overfitting during training of UDA, which aligns with similar observations in learning scenarios involving independent and identically distributed (IID) data <cit.>. Moreover, by factorizing the NLL objective employed in IID TempScal, we uncover that both correct and wrong predictions contribute to the final optimized temperature. As a result, we put forth the hypothesis that the target-domain oracle temperature can be accurately approximated by optimizing TempScal using data that share a similar accuracy-uncertainty distribution as real target data. Based on this hypothesis, we introduce our source-free approach called pseudo-target calibration (PseudoCal). PseudoCal begins by synthesizing a `labeled' dataset comprising pseudo-target samples and corresponding pseudo-target labels, generated using mixup <cit.> with real target samples and pseudo labels. Remarkably, we observe that the pseudo-target set exhibits a similar accuracy-confidence distribution to the real target set, as demonstrated in Figure <ref> (b). Such similarity can be attributed to the well-known  cluster assumption <cit.>, where samples located far away from the decision boundary are more likely to be correctly classified, while those near the decision boundary are prone to misclassification. Building on this assumption, we can establish correspondences between the pseudo-target set and real target set, where correctly predicted pseudo-target samples correspond to high-margin real samples, and wrongly predicted pseudo-target samples correspond to low-margin real samples. Such correspondences easily convert the unsupervised calibration problem into a supervised one. Consequently, our PseudoCal can estimate the Oracle real temperature by utilizing the pseudo temperature obtained through supervised TempScal optimization on the pseudo-target set. We make three primary contributions in this paper. * We address the understudied challenge of predictive uncertainty calibration in unsupervised domain adaptation (UDA) from a novel source-free perspective. Unlike existing approaches that treat UDA calibration as a covariate shift problem, we consider it as an unsupervised calibration problem in the target domain. This unique perspective unifies calibration in UDA across different settings, including scenarios with label shift or limited source access. * We introduce a novel source-free and post-hoc approach, namely pseudo-target calibration (PseudoCal), for UDA calibration. By leveraging the cluster assumption, PseudoCal successfully converts the unsupervised calibration problem into a more manageable supervised problem. 
PseudoCal achieves this by generating a `labeled' pseudo-target set through mixup and employing supervised TempScal optimization on this dataset to estimate the pseudo temperature used for the real target samples. * We conduct a comprehensive evaluation of PseudoCal and compare it with 7 existing calibration baselines in UDA. Specifically, we conduct experiments on 10 UDA methods across 5 challenging UDA scenarios, spanning diverse UDA benchmarks including both image classification and segmentation tasks. The calibration results consistently demonstrate that, on average, PseudoCal significantly outperforms all of the competing methods. § RELATED WORK Unsupervised domain adaptation. Unsupervised domain adaptation (UDA) has witnessed notable progress, evident in the proposal of various effective UDA approaches, the extension to diverse machine learning tasks, and the exploration of a wide range of real-world settings. UDA has been extensively studied in image classification tasks, where existing state-of-the-art methods can be categorized into two main lines: (1) distribution alignment across domains using specific discrepancy measures <cit.> or adversarial learning <cit.>, and (2) target domain-based learning with self-training <cit.> or regularizations <cit.>. Moreover, UDA has also been studied in object detection <cit.> and image segmentation <cit.>. Initially, UDA is based on the covariate shift assumption <cit.>, which means that two domains share similar label and conditional distributions but have different input distributions. This is commonly referred to as closed-set UDA. In recent years, several new practical settings have emerged to address additional challenges. These settings further consider label shift <cit.>, including partial-set UDA <cit.>, where some source classes are missing in the target domain, and open-set UDA <cit.>, where the target domain contains samples from unknown classes. Recently, there has been a growing interest in a novel practical setting called source-free UDA, which focuses on preserving source privacy. Source-free UDA encompasses two main settings: the white-box setting <cit.>, where the source model is available for target adaptation, and the more stringent black-box setting <cit.>, where the source model is solely utilized for inference purposes. Uncertainty calibration. The study of uncertainty calibration begins with techniques such as histogram binning <cit.>, isotonic regression <cit.>, and Platt scaling <cit.>, initially applied to binary classification tasks. Guo et al.<cit.> extends Platt scaling to multi-class classification and introduces matrix scaling (MatrixScal), vector scaling(VectorScal), and temperature scaling (TempScal). These post-hoc methods require a labeled validation set for calibration. On the other hand, there are methods that address calibration during model training, including Monte Carlo Dropout (MC-Dropout)<cit.>, Ensemble <cit.>, and Stochastic Variational Bayesian Inference (SVI) <cit.>. However, an evaluation in <cit.> reveals that these methods do not maintain calibration performance under dataset shift. In addition to calibration in IID settings and classification tasks, there is growing interest in calibration under distribution shifts <cit.> and in semantic segmentation tasks <cit.>. In this paper, we specifically address the calibration problem in single-source unsupervised domain adaptation (UDA). Various calibration methods have been proposed to handle domain distribution shifts. 
The first type utilizes importance weighting <cit.> to address calibration under covariate shift in UDA, exemplified by CPCS <cit.> and TransCal <cit.>. The second type involves perturbing the source validation set to serve as a general target set <cit.>. More recently, some methods <cit.> have utilized multiple source domains to calibrate the unlabeled target domain in UDA. Additionally, there are training-stage calibration methods that employ label smoothing <cit.> or optimize accuracy-uncertainty differentiably <cit.>. Among these methods, CPCS and TransCal are noteworthy as they specifically address transductive target calibration in UDA. For more general approaches like MC-Dropout and Ensemble, we compare our method directly with Ensemble because it consistently outperforms MC-Dropout. Table <ref> presents a comprehensive comparison of these typical UDA calibration methods. Our proposed method, PseudoCal, distinguishes itself through its simplicity, achieving source-free calibration with a single source model. § APPROACH In this paper, we address the problem of predictive uncertainty calibration in the context of unsupervised domain adaptation (UDA). We begin by introducing UDA with a C-way image classification task. UDA involves two domains: a labeled source domain and an unlabeled target domain. The source domain 𝒟_s={(x_s^i, y_s^i)}_i=1^n_s consists of n_s images x_s with their corresponding labels y_s, where x_s^i ∈𝒳_s and y_s^i ∈𝒴_s. The target domain 𝒟_t={x_t^i}_i=1^n_t contains unlabeled images x_t, where x_t^i ∈𝒳_t. The objective of UDA is to learn a UDA model ϕ that can predict the unknown ground truth labels {y_t^i}_i=1^n_t for the target domain, utilizing data from both domains simultaneously <cit.> or sequentially <cit.>. In addition to covariate shift, we also tackle label shift in partial-set UDA <cit.>. §.§ Calibration Metrics Next, we introduce the calibration problem and relevant metrics. When feeding a random sample (x, y) into the UDA model ϕ, we can obtain the predicted class ŷ and the corresponding softmax-based confidence p̂. Ideally, the confidence should accurately reflect the probability of correctness, expressed as ℙ (ŷ=y | p̂=p) = p, ∀ p ∈ [0, 1]. This perfect calibration, also known as Perfect, is impossible to achieve <cit.>. The widely used metric for evaluating calibration error is the expected calibration error (ECE) <cit.>. ECE involves partitioning probability predictions into M bins, with B_m representing the indices of samples falling into the m-th bin. It calculates the weighted average of the accuracy-confidence difference across all bins: ℒ_ECE = ∑_m=1^M |B_m| /n | acc ( B_m ) - conf ( B_m )| Here, n represents the number of samples, and for the m-th bin, the accuracy is computed as acc (B_m) = |B_m|^-1∑_i ∈ B_m1(ŷ_i = y_i), and the confidence is computed as conf (B_m) = |B_m|^-1∑_i ∈ B_mp̂_̂î. The introduction of additional popular metrics, such as NLL and Brier Score (BS) <cit.>, is provided in the appendix for further reference. §.§ Factorized Temperature Scaling Temperature scaling (TempScal) <cit.> is a widely employed calibration method in IID learning scenarios due to its simplicity and effectiveness. It is a post-hoc calibration technique that optimizes a temperature scalar, denoted as T, on a labeled validation set using the negative log-likelihood (NLL) loss between the temperature-flattened softmax predictions and the ground truth labels. 
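Given validation logits and integer labels, this amounts to a one-dimensional optimization over T; a minimal NumPy/SciPy sketch (illustrative rather than the implementation used in the experiments) is:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def nll(logits, labels, T):
    """Average negative log-likelihood of softmax(logits / T) against integer labels."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)                          # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))  # log-softmax
    return -np.mean(log_probs[np.arange(len(labels)), labels])

def fit_temperature(logits, labels, bounds=(0.05, 20.0)):
    """Return the temperature T minimizing the NLL on a labeled set; the search range is an assumed choice."""
    res = minimize_scalar(lambda T: nll(logits, labels, T), bounds=bounds, method='bounded')
    return res.x
```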
For the unlabeled target domain in UDA, we define the calibration achieved by applying TempScal with raw predictions and unattainable target ground truths as the `Oracle' calibration. This serves as an upper bound for all other calibration methods. Let z represent the corresponding logit vector for the image input x, and let σ(·) denote the softmax function. The `Oracle' target temperature, denoted as T_o, can be obtained using the original temperature scaling optimization formulated as follows T_o = min_T 𝔼_(x_i, y_i) ∈𝒟_t ℒ_NLL(σ (z_i/ T), y_i ) Upon closer examination of TempScal, we observe that samples in the validation set can be classified as either correctly or wrongly predicted. Further, both types of samples have contrasting effects on the temperature optimization process. Specifically, the NLL minimization favors a small temperature to sharpen the confidence with correct predictions and a large temperature to flatten the confidence with wrong predictions. As a result, we can decompose Equation <ref> as follows: T_o = min_T N_c/N𝔼_(x_i, y_i) ∈𝒟_c ℒ_NLL( σ (z_i/ T), y_i ) + N_w/N𝔼_(x_j, y_j) ∈𝒟_w ℒ_NLL( σ (z_j/ T), y_j ), where 𝒟_c represents the dataset of correctly predicted samples, comprising N_c instances. Similarly, 𝒟_w denotes the dataset of wrongly predicted samples, consisting of N_w instances. §.§ PseudoCal: Pseudo-Target Calibration Motivation. We propose an innovative perspective on uncertainty calibration in UDA by reframing it as an unsupervised calibration problem in the target domain, completely independent of source data. Examining the factorization in Equation <ref>, we observe that if two data sets exhibit a similar correct-wrong pattern, they should also share a similar temperature when using TempScal. This observation motivates our hypothesis: if we can synthesize a labeled pseudo-target set with a similar correct-wrong pattern as the real target set, we can obtain a reliable estimation of the target oracle temperature even without applying TempScal directly to the real target. However, modeling the correct-wrong pattern of the real target directly is infeasible without target labels. The presence of domain shift often leads to significant deviations between predicted pseudo-labels and ground truth labels, rendering calibration with raw predictions and pseudo-labels unreliable. This is demonstrated in our experiments (Table <ref>). To address this issue, we propose synthesizing samples to approximate the accuracy-confidence distribution of the real target. In contrast to other augmentation techniques involving random perturbations <cit.> or vicinal perturbations <cit.>, we find that mixup provides a simple approach to generate controlled cross-cluster perturbations. Notably, the mixed samples naturally encompass both correct and wrong predictions, aligning with the cluster assumption <cit.> that we will discuss later in our analysis. Pseudo-target synthesis via mixup. We first generate a pseudo-target set by applying the mixup technique <cit.> to all target samples. Specifically, a pseudo-target sample x_pt and its label y_pt are obtained by taking a convex combination of a pair of real target samples x_t^i, x_t^j and the different predicted pseudo labels ŷ_t^i, ŷ_t^j. Consequently, we obtain a labeled pseudo-target set {(x_pt^i, y_pt^i)}_i=1^n_pt, where n_pt represents the amount. 
The process of pseudo-target synthesis is formulated as follows: x_pt = λ * x_t^i + (1 - λ) * x_t^j, y_pt = λ * ŷ_t^i + (1 - λ) * ŷ_t^j, where λ is a fixed scalar used as the mix ratio. Supervised calibration with temperature scaling. Using the generated labeled pseudo-target set {(x_pt^i, y_pt^i)}_i=1^n_pt, we can easily determine the optimal pseudo-target temperature through supervised methods such as TempScal. This estimated temperature serves as an approximation of the `Oracle' target temperature. With this step, we effectively transform the challenging unsupervised calibration problem associated with the real target set into a supervised one using the pseudo-target set. The source-free calibration pipeline of PseudoCal is illustrated in Figure <ref>, where the UDA model is utilized as a black box solely for inference. We compare the accuracy-confidence distribution between the real target and pseudo target, and present the calibration performance of PseudoCal in comparison to the vanilla no calibration case in Figure <ref> (b), providing strong evidence to support our hypothesis and validate the effectiveness of PseudoCal. Analysis through the lens of the cluster assumption. We offer an intuitive analysis of why mixup facilitates the synthesis of a pseudo-target set with a similar accuracy-confidence distribution to the real target. Our analysis is grounded in the widely accepted and theoretically justified cluster assumption <cit.>, which has been extensively applied in semi-supervised learning <cit.> and domain adaptation <cit.>. According to the cluster assumption, the decision boundary should reside in low-density regions of a learned cluster structure. This implies that samples located far from the decision boundary are more likely to be correctly classified, whereas those near the boundary are prone to misclassification. In a UDA task, the model ϕ is typically well-trained to learn a target structure. When employing mixup, the hard pseudo-target label of a pseudo-target sample is determined by the dominant real sample with a mix ratio exceeding 0.5. Consequently, we can expect the following correspondence in terms of the accuracy-confidence distribution between the real target and pseudo target: (i) pseudo-target samples with correct predictions matching pseudo-target labels indicate that their dominant real samples also possess correct real predictions, (ii) conversely, pseudo-target samples with wrong predictions mismatching pseudo-target labels indicate that their dominant real samples have wrong real predictions. This correspondence provides a certain degree of guarantee for the success of PseudoCal. We empirically demonstrate the robustness of such a guarantee across various UDA tasks. Remarkably, even when applied to a weak UDA model with a target accuracy of only 30%, PseudoCal consistently exhibits substantial improvements in calibration. § EXPERIMENTS §.§ Settings Datasets. For image classification, we adopt 5 popular UDA benchmarks of varied scales. Office-31 <cit.> is a small-scale benchmark with 31 classes in 3 domains: Amazon (A), DSLR (D), and Webcam (W). Office-Home <cit.> is a medium-scale benchmark with 65 classes in 4 domains: Art (Ar), Clipart (Cl), Product (Pr), and Real-World (Re). VisDA <cit.> is a large-scale benchmark with over 200k images across 12 classes in 2 domains: Training (T) and Validation (V). DomainNet <cit.> is a large-scale benchmark with 600k images. 
We take a subset of 126 classes with 7 tasks<cit.> from 4 domains: Real (R), Clipart (C), Painting (P), and Sketch (S). Image-Sketch <cit.> is a large-scale benchmark with 1000 classes in 2 domains: ImageNet (I) and Sketch (S). For semantic segmentation, we use Cityscapes<cit.> as the target domain and either GTA5<cit.> or SYNTHIA <cit.> as the source. UDA methods. We evaluate calibration on 10 UDA methods across 5 UDA scenarios. For image classification, we cover closed-set UDA methods (ATDOC <cit.>, BNM <cit.>, MCC <cit.>, CDAN <cit.>, SAFN <cit.>, MCD <cit.>), partial-set UDA methods (ATDOC <cit.>, MCC <cit.>, PADA <cit.>), the whit-box source-free UDA method (SHOT <cit.>), and the black-box source-free UDA method (DINE <cit.>). For semantic segmentation, we focus on calibrating source models without any adaptation. Calibration baselines. To provide a comprehensive comparison, we consider typical calibration baselines in UDA, including the no calibration baseline (No Calib.), IID calibration methods (MatrixScal <cit.>, VectorScal <cit.>, TempScal <cit.>), cross-domain calibration methods (CPCS <cit.>, TransCal <cit.>), and a general calibration method (Ensemble <cit.>). Implementation details. We train all UDA models using the official code until convergence on a single RTX TITAN 16GB GPU. We adopt ResNet-101 <cit.> for VisDA and segmentation tasks, ResNet-34 <cit.> for DomainNet, and ResNet-50 <cit.> for all other tasks. For PseudoCal, a fixed mix ratio λ of 0.65 is employed in all experiments. The UDA model is utilized for one-epoch inference with mixup to generate the pseudo-target set. The reported results are averaged over five random runs. §.§ Results We evaluate the calibration performance of PseudoCal across 5 UDA scenarios. For classification tasks, we report the average ECE results for UDA tasks with the same target domain in Tables <ref>-<ref>. For segmentation tasks, we take each pixel as a sample and report the results in Table <ref>. `Oracle' refers to the aforementioned `Oracle' calibration with target labels, and `Accuracy' (%) denotes the target accuracy of the UDA model. Closed-set UDA. We evaluate 6 UDA methods on 4 benchmarks for closed-set UDA. Specifically, we report the ECE for Office-Home in Table <ref>, ECE for both Office-31 and VisDA in Table <ref>, and ECE for DomainNet in Table <ref>. PseudoCal consistently achieves a low ECE close to `Oracle', significantly outperforming other calibration methods by a wide margin. On the evaluated benchmarks, PseudoCal shows average ECE improvements of 4.33% on Office-Home, 1.88% on Office-31, 2.77% on VisDA, and 5.95% on DomainNet when compared to the second-best calibration method. Partial-set UDA. We evaluate 3 partial-set UDA methods on Office-Home and report the ECE in Table <ref>. PseudoCal consistently performs the best on average and outperforms the second-best method (Ensemble) by a significant margin of 4.24%. Source-free UDA. We evaluate the popular source-free UDA settings using SHOT for the white-box setting and DINE for the black-box setting. We report the ECE for large-scale benchmarks DomainNet and Image-Sketch together in Table <ref> and compare PseudoCal with the other source-free method Ensemble. PseudoCal outperforms Ensemble on both benchmarks by significant margins, with 7.44% on DomainNet and 15.05% on Image-Sketch. Semantic segmentation. In addition to classification tasks, we evaluate PseudoCal on domain adaptive semantic segmentation tasks and report the ECE in Table <ref>. 
PseudoCal performs the best on average and demonstrates an average ECE improvement of 4.62% over the no-calibration baseline. §.§ Discussions Qualitative comparisons. We present reliability diagrams <cit.> of different calibration methods in Figure <ref> (a)-(b). PseudoCal consistently aligns with `Oracle'in both UDA settings, while the state-of-the-art method TransCal deviates significantly. Impact of mix ratio λ. We investigate the effect of the fixed mix ratio λ used in mixup, ranging from 0.51 to 0.9, on the ECE of two closed-set UDA methods (including SHOT) on DomainNet in Figure <ref> (c) and two partial-set UDA methods on Office-Home in Figure <ref> (d). We examine the mixup with both `Hard' labels (one-hot label), and `Soft' labels (soft predictions). We find that PseudoCal achieves optimal performance within a medium range of λ values, specifically between 0.6 and 0.7, regardless of the use of hard or soft labels. A λ closer to 0.5 generates more ambiguous samples, leading to increased wrong predictions, while a λ closer to 1.0 results in the opposite effect. To ensure simplicity, we adopt a value of 0.65 for λ with hard labels for all experiments. Robustness to backbones and metrics. In order to examine the robustness of PseudoCal across different backbones and calibration metrics, we assess its performance using ViT-B <cit.> as the backbone and present the results for three metrics in Table <ref>. The findings reveal that PseudoCal consistently achieves top performance regardless of the choice of backbone or calibration metric. Impact of pseudo label quality. Despite the low accuracy of pseudo labels (approximately 30%) on the `I→S' task in Table <ref>, PseudoCal consistently exhibits strong calibration performance, indicating its effectiveness even in the presence of low-quality pseudo labels. Ablation study on pseudo-target synthesis. In our PseudoCal method, we utilize input-level mixup with a fixed mix ratio (λ) to synthesize a pseudo-target sample by combining a pair of real samples with different pseudo labels. To conduct a thorough ablation study, we compare this data synthesis strategy with alternative choices, such as mixup between samples with the same pseudo label (referred to as PseudoCal-same), instance-based augmentations <cit.>, mixing at different levels <cit.>, using λ values sampled from Beta(0.3, 0.3) <cit.>, and directly utilizing pseudo-labeled real target samples <cit.>. A comprehensive comparison of all strategies is presented in Table <ref>. Our PseudoCal consistently outperforms the alternative options, benefiting from its superiority in accurately approximating the accuracy-confidence distribution of real target data. Illustration of the real-pseudo correspondence. In Figure <ref> (b), we present an illustration that highlights the remarkable similarity in the accuracy-confidence distribution between the real target and pseudo target. To provide a more comprehensive understanding of the correspondence, we delve into the sample-level analysis. Within each pair of real samples in the mixup operation, we establish a correspondence when both the mixed pseudo sample and its dominant real sample are either correctly predicted or incorrectly predicted, evaluated by their respective labels. To quantify the observed correspondence, we calculate the correspondence rate as a percentage by dividing the number of corresponding pairs by the total number of pseudo-target samples. 
The results of our evaluation, presented in Table <ref>, demonstrate that PseudoCal consistently exhibits a high correspondence rate exceeding 60% across different tasks with varied model accuracy. These findings provide further direct evidence in support of the existence of real-pseudo correspondence. Comparison with Ensemble. We compare PseudoCal with a general calibration method Ensemble, which involves averaging predictions from multiple independently trained models. Our comparison demonstrates that Ensemble and PseudoCal are the only two methods that consistently maintain stable calibration performance across different UDA tasks. Notably, PseudoCal further surpasses Ensemble in terms of performance gains and computational efficiency. Limitations and broader impacts. PseudoCal has the following limitations and potential negative societal impacts: (i) Like other calibration methods compared, PseudoCal may occasionally increase ECE when the initial ECE is already small (see →D in Table <ref>), which raises risks for safety-critical decision-making systems. (ii) While PseudoCal can handle the source-free calibration setting, it may face challenges in extreme cases with very few available target samples, such as only a single target sample. (iii) PseudoCal is partly dependent on the cluster assumption, and it may fail if the target pseudo label is extremely poor, i.e., performing similarly to random trials. (iv) PseudoCal is based on temperature scaling and may not be suitable for open-set settings where the confidence of unknown samples is determined by various thresholding methods rather than differential softmax. § CONCLUSION In conclusion, we have introduced PseudoCal, a novel source-free calibration method for addressing the challenge of predictive uncertainty calibration in unsupervised domain adaptation (UDA). By relying solely on unlabeled target data, PseudoCal treats UDA calibration as an unsupervised calibration problem, distinguishing it from previous approaches based on the covariate shift assumption. Through the generation of a labeled pseudo-target set that replicates the accuracy-confidence distribution of real target samples, PseudoCal effectively converts the unsupervised calibration problem into a supervised one, leveraging popular IID methods such as temperature scaling for calibration. Our comprehensive evaluations across diverse UDA settings, including source-free scenarios and semantic segmentation, consistently demonstrate the superior performance of PseudoCal compared to existing calibration methods. Notably, PseudoCal stands out in terms of both its simplicity and effectiveness, offering a promising solution for enhancing the calibration of UDA models in practical applications. neurips § ALGORITHM The PyTorch-style pseudocode for our validation method PseudoCal is provided in Algorithm <ref>. § SEMANTIC SEGMENTATION CALIBRATION DETAILS For our calibration experiments on semantic segmentation, we calibrate the models trained solely on the source domain (GTA5 <cit.> or SYNTHIA <cit.>) without any target adaptation. We treat each pixel as an individual sample in classification tasks for both mixup and temperature scaling. To address the computational complexity, we adopt the evaluation strategy suggested in previous studies <cit.> and randomly sample 20,000 pixels from each image (with resolutions such as 1920*720) for calibration. 
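For concreteness, the pseudo-target synthesis step summarized in Algorithm <ref> and in the main text can be sketched as follows. This is a minimal PyTorch-style sketch, not the released implementation; the function and variable names are ours. It mixes pairs of target samples whose hard pseudo-labels differ, using the fixed ratio λ = 0.65 adopted in the experiments, and assigns each mixed sample the pseudo-label of its dominant component.

```python
import torch

def synthesize_pseudo_target(inputs, pseudo_labels, lam=0.65, seed=0):
    """Build a labeled pseudo-target set by input-level mixup of real target samples.

    inputs:        (N, C, H, W) unlabeled target-domain images
    pseudo_labels: (N,) hard (argmax) predictions of the UDA model
    Returns the mixed inputs and the pseudo-label of the dominant (weight-lam) sample.
    """
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(inputs.size(0), generator=g)
    # keep only pairs whose hard pseudo-labels disagree
    keep = pseudo_labels != pseudo_labels[perm]
    mixed = lam * inputs[keep] + (1.0 - lam) * inputs[perm][keep]
    return mixed, pseudo_labels[keep]
```

The resulting (mixed input, label) pairs then play the role of a labeled set, so a temperature can be fitted on them exactly as in standard supervised temperature scaling.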
§ ADDITIONAL CALIBRATION METRICS In addition to the Expected Calibration Error (ECE) <cit.> discussed in the main text, we also consider two other calibration metrics as follows. Let 𝐲_i represent the one-hot ground truth encoding for input sample x_i, and 𝐩̂_i denote the predicted probability vector output by the model ϕ. Negative Log-Likelihood (NLL) <cit.> is also known as the cross-entropy loss. The NLL loss for a single sample x_i is given by: ℒ_ NLL = - ∑_c=1^C 𝐲_i^c log𝐩̂_i^c Brier Score (BS) <cit.> can be defined as the squared error between the predicted probability vector and the one-hot label vector. The Brier Score for a single sample x_i is given by: ℒ_ BS = 1/C∑_c=1^C (𝐩̂_i^c - 𝐲_i^c)^2 In addition to the ViT results presented in the main text, we have observed consistent advantages of our PseudoCal method over existing calibration methods across all three calibration metrics: ECE, NLL, and BS. We choose to report the ECE results for most of the experiments as ECE <cit.> is one of the widely used calibration metrics. § FULL CALIBRATION RESULTS Due to space constraints in the main text, we have presented the average ECE results for tasks with the same target domain. For detailed calibration results of each task, please refer to Table <ref> to Table <ref>.
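The three metrics reported in this work can be computed with a few lines of NumPy. The sketch below is ours; in particular, the ECE here uses the common equal-width confidence binning with 15 bins, which is an assumption rather than a detail taken from the paper.

```python
import numpy as np

def nll(probs, labels):
    # mean of -sum_c y_c log p_c, i.e. cross-entropy with one-hot targets
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12)))

def brier(probs, labels):
    # mean over samples of (1/C) * sum_c (p_c - y_c)^2
    onehot = np.eye(probs.shape[1])[labels]
    return float(np.mean((probs - onehot) ** 2))

def ece(probs, labels, n_bins=15):
    # expected calibration error with equal-width confidence bins
    conf, pred = probs.max(axis=1), probs.argmax(axis=1)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    err = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            acc = np.mean(pred[in_bin] == labels[in_bin])
            err += in_bin.mean() * abs(np.mean(conf[in_bin]) - acc)
    return float(err)
```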
http://arxiv.org/abs/2307.06170v2
20230712135226
Exponential stability of damped Euler-Bernoulli beam controlled by boundary springs and dampers
[ "Onur Baysal", "Alemdar Hasanov", "Alexandre Kawano" ]
math.AP
[ "math.AP", "cs.NA", "math.NA", "math.OC" ]
1]Onur Baysalcor3fn1 [email protected] 2]Alemdar Hasanov fn2 [email protected] 3]Alexandre Kawanofn3 [email protected] [cor3]Corresponding author [fn1]Department of Mathematics, University of Malta, Malta [fn2]Department of Mathematics, Kocaeli University, Turkey [fn3]Escola Politécnica, University of São Paulo, São Paulo 05508900, Brazil [1]Department of Mathematics, University of Malta, Malta [2]Kocaeli University, 41001, Kocaeli, Turkey [3]University of São Paulo, São Paulo 05508900, Brazil In this paper, the vibration model of an elastic beam, governed by the damped Euler-Bernoulli equation ρ(x)u_tt+μ(x)u_t+(r(x)u_xx)_xx=0, subject to the clamped boundary conditions u(0,t)=u_x(0,t)=0 at x=0, and the boundary conditions (-r(x)u_xx)_x=ℓ=k_r u_x(ℓ,t)+k_a u_xt(ℓ,t), (-(r(x)u_xx)_x )_x=ℓ=- k_d u(ℓ,t)-k_v u_t(ℓ,t) at x=ℓ, is analyzed. The boundary conditions at x=ℓ correspond to linear combinations of damping moments caused by rotation and angular velocity and also, of forces caused by displacement and velocity, respectively. The system stability analysis based on well-known Lyapunov approach is developed. Under the natural assumptions guaranteeing the existence of a regular weak solution, uniform exponential decay estimate for the energy of the system is derived. The decay rate constant in this estimate depends only on the physical and geometric parameters of the beam, including the viscous external damping coefficient μ(x) ≥ 0, and the boundary springs k_r,k_d ≥ 0 and dampers k_a,k_v ≥ 0. Some numerical examples are given to illustrate the role of the damping coefficient and the boundary dampers. Damped Euler-Bernoulli beam, boundary springs and dampers, exponential stabilization, energy decay rate. § INTRODUCTION In his paper, we study the exponential stability of the system governed by the following initial boundary value problem for the non-homogeneous damped Euler-Bernoulli beam controlled by boundary springs and dampers: {[ ρ(x)u_tt+μ(x)u_t+(r(x)u_xx)_xx=0, (x,t) ∈Ω_T,; [4pt] u(x,0)=u_0(x),   u_t(x,0)=u_1(x), x∈ (0,ℓ),; [4pt] u(0,t)=u_x(0,t)=0, (-r(x)u_xx)_x=ℓ=k_r u_x(ℓ,t)+k_a u_xt(ℓ,t),; [4pt] (-(r(x)u_xx)_x)_x=ℓ=-k_d u(ℓ,t)-k_v u_t(ℓ,t), t∈ [0,T], ]. where Ω_T=(0,ℓ)×(0,T), ℓ>0 is the length of the beam and T>0 is the final time. Here and below, u(x,t) is the vertical displacement, r(x):=E(x)I(x)>0 is the flexural rigidity (or bending stiffness) of the beam while E(x)>0 is the elasticity modulus and I(x)>0 is the moment of inertia of the cross section. The non-negative coefficient μ(x) represents the viscous external damping. Furthermore, the following variables have engineering meanings: u_t(x,t), u_x(x,t), u_xt(x,t), u_xx(x,t), -(r(x)u_xx) and -(r(x)u_xx)_x are the velocity, rotation, angular velocity, curvature, moment and shear force, respectively <cit.>. The nonnegative constants k_r,k_d ≥ 0 and k_a,k_v ≥ 0 represent the boundary springs and dampers, respectively. The first boundary condition (-r(x)u_xx)_x=ℓ=k_r u_x(ℓ,t)+k_v u_xt(ℓ,t) at x=ℓ means the control resulting from the linear combination of rotation and angular velocity, and the second boundary condition (-(r(x)u_xx)_x)_x=ℓ=-k_d u(ℓ,t)-k_v u_t(ℓ,t) means the control resulting from the linear combination of displacement and velocity. In this context, the above constants k_r,k_d,k_a, k_v are defined also as the boundary controls. 
It should be emphasized that in almost all flexible structures modeled by the Euler-Bernoulli equation, one or another special case of these boundary conditions is used (see <cit.> and references therein). Namely, it is shown in <cit.> that the generalized eigenfunctions of the simplest undamped Euler-Bernoulli equation u_tt-u_xxxx=0 with boundary linear feedback control u_xx(ℓ,t)=-k_a u_xt(ℓ,t), u_xxx(ℓ,t)=k_v u_t(ℓ,t), form a Riesz basis in the state Hilbert space, which leads to exponential stability. Furthermore, for the case when k_r=k_a= k_d=0 and μ(x)=0, the Riesz basis property and the stability of the system were studied in <cit.>. The same issues were studied in <cit.> for the system (<ref>) with k_r=k_a=0. Other simplified versions of the model governed by (<ref>) have been used for the mast control system in the Control of Flexible Structures Program of NASA <cit.>. In <cit.>, the authors examine and prove for the first time that there is exponential stability in the situation where only rotational damping is present at the extreme of a cantilever beam, with applications to long flexible structures that are modeled by the Euler-Bernoulli equation. In <cit.>, the often encountered configuration in engineering practice, in which there is a finite number of serially connected beams, is analysed. In it, the problem of proving uniform exponential stability when one damper is positioned at the extremes of the composite structure, or at some intermediate interconnecting node, is addressed. This problem is of great interest for structural engineers. In all the above cited works, a semigroup approach was used to obtain the Riesz basis property of the eigenfunctions, which is one of the fundamental properties of a linear vibrating system. It is well known that for such a Riesz system, the stability is usually determined by the spectrum of the associated operator. However, in the exponential stability estimate ℰ(t) ≤ M e^-ω tℰ(0), obtained in the above mentioned studies, the relationship of the decay rate parameter ω >0 with the physical and geometric parameters of the beam, including the damping coefficient and the boundary dampers, has not been determined. In addition, it does not seem possible to obtain this relationship anyway, due to the methods used in these studies. § ENERGY IDENTITY AND DISSIPATIVITY OF SYSTEM (<REF>) We assume that the inputs in (<ref>) satisfy the following basic conditions: {[ ρ, μ∈ L^∞(0,ℓ), r ∈ H^2(0,ℓ),; [3pt] u_0∈ H^2(0,ℓ), u_1∈ L^2(0,ℓ),; [3pt] 0<ρ_0≤ρ(x)≤ρ_1,  0< r_0 ≤ r(x) ≤ r_1,; [3pt] 0≤μ_0≤μ(x)≤μ_1, x ∈ (0,ℓ),; [3pt] k_r,k_a,k_d,k_v≥ 0,   k_a+k_v+μ_0>0. ]. Following the procedure described in <cit.>, one can prove that under conditions (<ref>), there exists a regular weak solution u∈ L^2(0,T; H^4(0,ℓ)), u_t∈ L^2(0,T; 𝒱^2(0,ℓ)) with u_tt∈ L^2(0,T;L^2(0,ℓ)) of problem (<ref>), where 𝒱^2(0,ℓ):={v∈ H^2(0,ℓ): v(0)=v'(0)=0 }. Assume that conditions (<ref>) are satisfied. Then the following energy identity holds: ℰ(t) + ∫_0^t ∫_0^ℓμ(x) u_τ^2 (x,τ) dx d τ [1pt] =ℰ(0)-k_a ∫_0^t u_xτ^2(ℓ,τ) d τ -k_v ∫_0^t u_τ^2(ℓ,τ) d τ, t∈[0,T], where ℰ(t)=1/2∫_0^ℓ [ ρ(x) u^2_t(x,t)+r(x) u^2_xx(x,t) ] dx [1pt] +1/2 k_r u_x^2(ℓ,t) +1/2 k_d u^2 (ℓ,t), t∈[0,T], is the total energy of system (<ref>) and ℰ(0)=1/2∫_0^ℓ [ ρ(x) ( u_1(x))^2 + r(x) ( u''_0(x))^2 ] dx [1pt] +1/2 k_r ( u'_0(ℓ))^2+1/2 k_d ( u_0(ℓ))^2 is the initial value of the total energy. Proof.
Multiplying both sides of equation (<ref>) by u_t(x,t), integrating it over Ω_t:=(0,ℓ)× (0,t), using the identity (r(x)u_xx)_xx u_t = [(r(x)u_xx)_x u_t-r(x)u_xx u_xt]_x+ 1/2 (r(x)u_xx^2 )_t, we obtain the following integral identity: 1/2∫_0^t∫_0^ℓ (ρ(x) u_τ^2 )_τdx dτ +1/2∫_0^t ∫_0^ℓ (r(x)u_xx^2 )_τdx dτ [1pt] +∫_0^t ((r(x)u_xx)_x u_τ-r(x)u_xx u_xτ)_x=0^x=ℓ dτ +∫_0^t ∫_0^ℓμ(x) u_τ^2 dx d τ=0, for all t ∈ (0,T]. Using here the initial and boundary conditions (<ref>), we obtain: 1/2∫_0^ℓ [ρ(x) u^2_t+ r(x) u^2_xx ]dx +1/2 k_r u_x^2(ℓ,t) +1/2 k_d u^2 (ℓ,t) [1pt] +∫_0^t ∫_0^ℓμ(x) u_τ^2 dx d τ+k_a ∫_0^t u_xτ^2(ℓ,τ) d τ +k_v ∫_0^t u_τ^2(ℓ,τ) d τ [1pt] -1/2∫_0^ℓ [ρ(x) (u_1(x) )^2+ r(x) (u''_0(x) )^2 ]dx -1/2 k_r ( u'_0(ℓ))^2-1/2 k_d ( u_0(ℓ))^2=0, for all t ∈ (0,T]. This leads to (<ref>) with (<ref>) and (<ref>). Identity (<ref>) shows that the increase in the damping parameters k_a, k_v ≥ 0 causes the energy ℰ(t) to decrease. Furthermore, from formula (<ref>) it follows that the increase of the spring parameters k_r, k_d ≥ 0 causes the energy to increase. If conditions (<ref>) are met, the formula below gives the rate at which the total energy decreases. d ℰ(t)/dt =-∫_0^ℓμ(x) u^2_tdx- k_a u_xt^2 (ℓ,t) - k_v u_t^2(ℓ,t), t∈ (0,T). Proof. In view of formula (<ref>) we have: d ℰ(t)/dt= ∫_0^ℓ [ ρ(x) u_tu_tt+r(x) u_xxu_xxt ] dx [1pt] +k_r u_x (ℓ,t) u_xt (ℓ,t) + k_d u(ℓ,t) u_t(ℓ,t), t∈[0,T]. Use here the (formal) identity ρ(x)u_tt=-μ(x)u_t-(r(x)u_xx)_xx to get d ℰ(t)/dt=- ∫_0^ℓμ(x) u^2_tdx- ∫_0^ℓ(r(x)u_xx)_xx u_t dx+∫_0^ℓ r(x) u_xxu_xxt dx [1pt] +k_r u_x (ℓ,t) u_xt (ℓ,t) + k_d u(ℓ,t) u_t(ℓ,t), t∈[0,T]. In the second right hand side integral, we employ the identity -∫_0^ℓ(r(x)u_xx)_xx u_t dx =-∫_0^ℓ r(x) u_xxu_xxtdx-k_d u(ℓ,t) u_t (ℓ,t) [1pt] - k_v u^2_t(ℓ,t) -k_r u_x(ℓ,t) u_xt (ℓ,t) - k_a u^2_xt(ℓ,t), t∈[0,T], which holds due to the boundary conditions in (<ref>). Substituting this identity in (<ref>) we arrive at the required formula (<ref>). Integrating (<ref>) over (0,t), t ∈ (0,T] we obtain the same energy identity (<ref>) rewritten in the following form: ℰ(0)-ℰ(t)=𝔧_μ (t)+𝔧_a (t) +𝔧_v (t), t∈[0,T], where . [ 𝔧_μ (t): = ∫_0^t∫_0^ℓμ(x) u^2_τ(x,τ) dx dτ,; [12pt] 𝔧_a(t): = k_a ∫_0^t u_xτ^2 (ℓ,τ)dτ,     𝔧_v (t): = k_v ∫_0^t u_τ^2(ℓ,τ)dτ, t∈[0,T]. ]. In particular, ℰ(t) ≤ℰ(0), t∈[0,T], that is, the energy of the system (<ref>) is dissipating with time. The above formula (<ref>) is a clear expression of the effect of the damping parameters μ(x), k_a and k_v on the rate of decrease of the total energy. In addition, the energy identity (<ref>) shows the degree of influence of these damping factors on the difference between the initial value ℰ(0) of the total energy and the value ℰ(t) of this energy at the time instant t∈ (0,T], through the integrals 𝔧_μ (t), 𝔧_a (t) and 𝔧_v(t) defined in (<ref>). § ENERGY DECAY ESTIMATE FOR SYSTEM (<REF>) Introduce the auxiliary function: 𝒥(t)= ∫_0^ℓρ(x) u u_tdx+1/2∫_0^ℓμ(x) u^2dx [2pt] +1/2 k_a u_x^2 (ℓ,t) +1/2 k_v u^2(ℓ,t), t∈[0,T], containing all damping parameters. We prove the formula d 𝒥(t)/dt= 2 ∫_0^ℓρ(x) u_t^2dx -2ℰ(t), t∈[0,T], which shows the relationship between the auxiliary function 𝒥(t) and the energy function ℰ(t) introduced in (<ref>). Taking the derivative of the function 𝒥(t) with respect to the time variable and using then the (formal) identity ρ(x)u_tt+μ(x)u_t=-(r(x)u_xx)_xx as above, we obtain: d 𝒥(t)/dt= ∫_0^ℓρ(x) u^2_tdx - ∫_0^ℓ(r(x)u_xx)_xx u dx [2pt] +k_a u_x (ℓ,t) u_xt (ℓ,t) +k_v u(ℓ,t) u_t (ℓ,t), t∈[0,T].
We employ here the following identity: -∫_0^ℓ(r(x)u_xx)_xx u dx = -∫_0^ℓ r(x) u^2_xxdx -k_d u^2(ℓ,t)-k_v u(ℓ,t) u_t(ℓ,t) -k_r u^2_x(ℓ,t)-k_a u_x(ℓ,t) u_xt (ℓ,t), t∈[0,T]. This yields: d 𝒥(t)/dt= ∫_0^ℓρ(x) u^2_tdx - ∫_0^ℓ r(x) u^2_xx dx [2pt] -k_d u^2(ℓ,t)-k_r u^2_x(ℓ,t), t∈[0,T]. With definition (<ref>) this implies the desired formula (<ref>). Under conditions (<ref>), the energy function ℰ(t) introduced in (<ref>) serves as lower and upper bounds to the auxiliary function 𝒥(t) introduced in (<ref>), that is -β_0 ℰ(t) ≤𝒥(t) ≤β_1 ℰ(t),  t∈[0,T], where . [ β_1=β_0 [1+ 1/√(ρ_1 r_0) (ℓ^2/2 μ_1+2/ℓ k_a + ℓ k_v ) ],; [14pt] β_0 =ℓ^2/2 √(ρ_1/r_0) .; ]. Proof. First we estimate the first right hand side integral in (<ref>). To this end, we employ the ε-inequality |∫_0^ℓρ(x) u u_tdx|≤ε/2 ∫_0^ℓρ(x) u_t^2dx + 1/2ε ∫_0^ℓρ(x) u^2dx, with the inequality ∫_0^ℓρ(x) u^2dx ≤ℓ^4 ρ_1/4 r_0∫_0^ℓ r(x)u_xx^2dx, to estimate the second right-hand side integral in above inequality. Choosing then the parameter ε>0 from the condition ε/2=ℓ^4 ρ_1/(8 r_0 ε) as ε= ℓ^2/2 √(ρ_1/r_0) , we obtain the following estimate: |∫_0^ℓρ(x) u u_tdx|≤ℓ^2/4 √(ρ_1/r_0) [ ∫_0^ℓρ(x) u_t^2dx + ∫_0^ℓ r(x) u_xx^2dx ]. For other right-hand side terms in formula (<ref>) for the auxiliary function 𝒥(t) we use the following inequalities: . [ 1/2∫_0^ℓμ(x) u^2 (x,t)dx ≤ℓ^4 μ_1/16 r_0∫_0^ℓ r(x)u_xx^2dx,; [10pt] 1/2 k_a u^2_x(ℓ,t)≤ℓ k_a/2 r_0∫_0^ℓ r(x)u_xx^2dx,; [10pt] 1/2 k_v u^2(ℓ,t)≤ℓ^3 k_v/4 r_0∫_0^ℓ r(x)u_xx^2dx, t∈ (0,T). ]. Taking into account (<ref>) and (<ref>) in (<ref>) we arrive at the following estimate 𝒥(t) ≤1/2 β_0 {∫_0^ℓρ (x)u^2_tdx . [2pt] + . [1+ ℓ^2/2 √(ρ_1r_0) μ_1 + 2/ℓ √(ρ_1r_0) k_a +ℓ/√(ρ_1r_0) k_v ] ∫_0^ℓ r(x)u^2_xxdx }, which leads to the upper bound 𝒥(t) ≤β_1 ℰ(t), t∈[0,T],  β_1 >0, with β_0, β_1>0 introduced in (<ref>). To find the lower bound for the auxiliary function 𝒥(t), we use again inequality (<ref>) in (<ref>) to conclude that 𝒥(t) ≥ - 1/2 β_0 {∫_0^ℓρ (x)u^2_tdx +∫_0^ℓ r(x)u^2_xxdx } + [2pt] 1/2∫_0^ℓμ(x) u^2 (x,t)dx + 1/2 k_a u_x^2 (ℓ,t) +1/2 k_v u^2(ℓ,t),  t∈[0,T], This leads to 𝒥(t) ≥ -β_0 ℰ(t), t∈ [0,T]. Thus, (<ref>) and (<ref>) imply the required lower and upper bounds (<ref>). The constants β_0,β_1>0 depend only on the geometric and physical parameters of a beam introduced in (<ref>), as formulas (<ref>) show. To establish the uniform energy decay estimate, we introduce the Lyapunov function: ℒ(t)=ℰ(t)+λ𝒥(t), t∈[0,T], where ℰ(t) and 𝒥(t) are the energy function and the auxiliary function introduced in (<ref>) and (<ref>), respectively, and λ>0 is the penalty term. Assume that conditions (<ref>) are satisfied. Then system (<ref>) is exponentially stable for any nonnegative values of the boundary spring and damper constants k_r, k_d, k_a, k_v≥ 0. That is, there are the constants M_d= 1+ β_1 λ/1- β_0 λ ,  σ=2 λ/1+β_1 λ , with 0<λ <min (1/ β_0, μ_0/(2ρ_1)). such that the energy ℰ(t) of system (<ref>) satisfies the following estimate: ℰ(t)≤ M_d e^-σ t ℰ(0), t∈[0,T], where μ_0,ρ_1>0 and β_0>0 are the constants introduced in (<ref>) and (<ref>), respectively, and ℰ(0)>0 is the initial energy defined in (<ref>). Proof. In view of (<ref>) we have: (1- λ β_0 ) ℰ(t) ≤ℒ(t) ≤ (1+ λ β_1 )ℰ(t), t∈[0,T]. In such a circumstance, we assume that the penalty term satisfies the following conditions: 0<λ <1/ β_0, β_0>0. 
Differentiating ℒ(t) with respect to the variable t∈ (0,T) and taking formulas (<ref>) and (<ref>) into account, we obtain: d ℒ(t)/dt+2 λℰ(t)= -∫_0^ℓ [μ(x)-2λρ(x) ] u_t^2dx [2pt] -k_a u^2_xt(ℓ,t)-k_v u^2_t(ℓ,t), t∈[0,T]. We require that μ(x)-2λρ(x)>0. Since μ(x) -2λρ(x) ≥μ_0-2λρ_1, the sufficient condition for this is the condition λ< μ_0/(2ρ_1). With (<ref>) this implies that the penalty term should satisfy conditions (<ref>). Then from (<ref>) we deduce that d ℒ(t)/dt+2 λℰ(t)<0, t∈[0,T]. With the inequality ℰ(t) ≥ℒ(t)/ (1+ λ β_1 ) this yields: d ℒ(t)/dt+2 λ/1+β_1 λ ℒ(t)<0, t∈[0,T]. Solving this inequality we find: ℒ(t)≤ e^-σ tℒ(0), t∈[0,T]. This yields the required estimate (<ref>) with the constants M_d,σ >0 introduced in (<ref>). In view of formulas (<ref>), the decay rate parameter σ>0 in the energy estimate (<ref>), obtained for the system governed by (<ref>) and controlled by boundary springs and dampers, clearly show the degree of influence of each of the damping parameters μ(x), k_a, k_v ≥ 0 in the dissipative boundary conditions on the energy decay. § SOME SPECIAL CASES Special cases of the general system (<ref>) described above are very common in practical applications of structures containing beam elements. In this section we deal with systems corresponding to special cases of the general system (<ref>) to investigate the influence of each damping factor. §.§ A cantilever beam fixed at one end and free at other Consider the simplest case when k_r=k_a=k_d=k_v=0 of system (<ref>), i.e. without the dissipative boundary conditions: {[ ρ(x)u_tt+μ(x)u_t+(r(x)u_xx)_xx=0, (x,t) ∈Ω_T,; [4pt] u(x,0)=u_0(x),   u_t(x,0)=u_1(x), x∈ (0,ℓ),; [4pt] u(0,t)=u_x(0,t)=0, (-r(x)u_xx)_x=ℓ=0,; [4pt] (-(r(x)u_xx)_x)_x=ℓ=0, t∈ [0,T]. ]. This is an initial boundary value problem for the damped cantilever beam. The exponential stability result for system (<ref>) directly follows from the results given in (<ref>)-(<ref>), {[ ℰ(t)≤ M_0 e^-σ_0 t ℰ(0), t∈[0,T],; [6pt] M_0= 1+ β_1 λ/1- β_0 λ ,  σ_0=2 λ/1+β_1 λ ,; [10pt] β_1=β_0 [1+ ℓ^2/4√(ρ_1 r_0) μ_1 ], β_0 =ℓ^2/2 √(ρ_1/r_0) ,; [14pt] 0<λ <min (1/ β_0, μ_0/(2ρ_1)), ]. assuming k_a=k_v=0 in (<ref>), and also k_r=k_d=0 in (<ref>). That is, the energy function corresponding to system (<ref>) is ℰ(t)=1/2∫_0^ℓ [ ρ(x) u^2_t(x,t)+r(x) u^2_xx(x,t) ] dx, t∈[0,T]. Formulas (<ref>) clearly show the nature of the influence of the viscous external damping coefficient μ(x), as a unique damping factor on the energy decay rate. §.§ A cantilever beam fixed at one end and attached to a spring at other This case corresponds to the zero values k_a=k_v=0 of the boundary damping parameters, and hence to the linear spring conditions at x=ℓ: {[ ρ(x)u_tt+μ(x)u_t+(r(x)u_xx)_xx=0, (x,t) ∈Ω_T,; [4pt] u(x,0)=u_0(x),   u_t(x,0)=u_1(x), x∈ (0,ℓ),; [4pt] u(0,t)=u_x(0,t)=0, (-r(x)u_xx)_x=ℓ=k_r u_x(ℓ,t),; [4pt] (-(r(x)u_xx)_x)_x=ℓ=k_d u(ℓ,t), t∈ [0,T]. ]. As in the previous case, the dissipativity of system (<ref>) is provided only by the viscous external damping given by the coefficient μ(x)>0. The same exponential stability result given in (<ref>) holds for system (<ref>). Furthermore, the energy function ℰ(t) corresponding to system (<ref>) is given by the same formula (<ref>) which, different from formula (<ref>), contains also the spring constants k_r,k_d ≥ 0. 
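To make the dependence of the decay rate on the physical and boundary parameters concrete, the constants in the estimate ℰ(t)≤ M_d e^-σ t ℰ(0) can be evaluated directly from the formulas above. The short Python sketch below is ours (it is not part of the paper's computations) and uses the parameter values of the test problem considered later in the Numerical Results section (ρ=1, r(x)=1+x, μ=2, ℓ=1, k_r=6, k_d=4, k_a=3, k_v=2); it reproduces the values β_0=1/2, β_1=5, M_d=12 and σ=1/3 quoted there.

```python
import math

# beam and damping parameters of the later test problem (illustrative values)
rho1, r0, mu0, mu1, l = 1.0, 1.0, 2.0, 2.0, 1.0
ka, kv = 3.0, 2.0

beta0 = (l**2 / 2) * math.sqrt(rho1 / r0)
beta1 = beta0 * (1 + (l**2 / 2 * mu1 + 2 / l * ka + l * kv) / math.sqrt(rho1 * r0))
lam_max = min(1 / beta0, mu0 / (2 * rho1))   # admissible range: 0 < lambda < lam_max

lam = 1.0                                    # limiting choice used later in the paper
M_d = (1 + beta1 * lam) / (1 - beta0 * lam)
sigma = 2 * lam / (1 + beta1 * lam)
print(beta0, beta1, lam_max, M_d, sigma)     # 0.5, 5.0, 1.0, 12.0, 0.333...
```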
§.§ A cantilever beam fixed at one end and subjected two dampers at other Consider the case where both spring parameters in (<ref>) are zero, k_r=k_d=0: {[ ρ(x)u_tt+μ(x)u_t+(r(x)u_xx)_xx=0, (x,t) ∈Ω_T,; [4pt] u(x,0)=u_0(x),   u_t(x,0)=u_1(x), x∈ (0,ℓ),; [4pt] u(0,t)=u_x(0,t)=0, (-r(x)u_xx)_x=ℓ=k_a u_xt(ℓ,t),; [4pt] (-(r(x)u_xx)_x)_x=ℓ=-k_v u_t(ℓ,t), t∈ [0,T], ]. This is a mathematical model for the mast control system. The simplest version {[ m u_tt+EI u_xxxx=0, (x,t) ∈Ω_T,; [4pt] u(x,0)=u_0(x),   u_t(x,0)=u_1(x), x∈ (0,ℓ),; [4pt] u(0,t)=u_x(0,t)=0, (-EI u_xx)_x=ℓ=k_a u_xt(ℓ,t),; [4pt] (-EI u_xxx)_x=ℓ=-k_v u_t(ℓ,t), t∈ [0,T], ]. of this model for the undamped Euler-Bernoulli equation with constant coefficients was first studied in <cit.> within NASA's Program of Control of Flexible Structures, and then developed in <cit.>. In this model, the meaning of the boundary conditions at x=ℓ is that the shear force -EI u_xxx is proportional to velocity u_t, and the bending moment -EI u_xx is negatively proportional to angular velocity u_xt, while the values of the boundary dampers k_a, k_v ≥ 0 play the role of the proportionality factors. Thus, the rate feedback laws at x=ℓ reflect basic features of mast control systems with bending and torsion rate control. The uniform exponential stability result ℰ(t)≤ K e^-μ t ℰ(0) for the energy of vibration of the beam governed by system (<ref>) was proved in <cit.>. However, the constants K, μ>0 are not related to either physical or boundary damping parameters. Therefore, from this estimate, it is impossible to reveal the degree of influence of these parameters on energy decay. The results given in (<ref>)-(<ref>), with the same constants β_0,β_1>0 introduced in (<ref>), are valid also for system (<ref>). However, in the case of μ(x)=0, the sufficient condition (<ref>) for ensuring the inequality (<ref>) cannot be given over the coefficient μ(x). As a consequence, the above results can not be used for system (<ref>) with undamped Euler-Bernoulli equation. Assume that conditions (<ref>) are satisfied and μ(x)=0. Suppose, in addition that . [ u^2_xt(ℓ,t)+u^2_t(ℓ,t)>0,  t∈ [0,T]. ]. Then system (<ref>) is exponentially stable: ℰ(t)≤ M_d e^-σ t ℰ(0), t∈[0,T], where . [ M_d= 1+ β_1 λ/1- β_0 λ ,  σ=2 λ/1+β_1 λ ,; [12pt] β_1=β_0 [1+1/√(m EI)(2/ℓ k_a + ℓ k_v ) ], β_0 =ℓ^2/2 √(m/EI),; [14pt] 0<λ <min ( 1/β_0, inf_[0,T] [k^2_a u^2_xt(ℓ,t)+k^2_v u^2_t(ℓ,t) ]/2m ‖ u_t ‖_L^∞(0,T;L^2(0,ℓ))^2 ). ]. Proof. This theorem is proved in the similar way as the previous theorem, one only needs to derive the similar inequality for the Lyapunov function ℒ(t), through the boundary damping parameters k_a,k_v≥ 0. To this end, we use the following analogue d ℒ(t)/dt+2 λℰ(t)= 2λ m ∫_0^ℓ u_t^2dx -k_a u^2_xt(ℓ,t)-k_v u^2_t(ℓ,t), t∈[0,T] of formula (<ref>) of the Lyapunov function, which corresponds to system (<ref>). We require that 2λ m ∫_0^ℓ u_t^2dx -k_a u^2_xt(ℓ,t)-k_v u^2_t(ℓ,t)<0, t∈[0,T] Evidently, for the penalty term λ>0 satisfying the third condition of (<ref>), the above inequality holds for all t∈[0,T]. This implies inequality (<ref>). The uniform exponential decay estimate (<ref>) is obtained from this inequality in the same way as in the proof of Theorem <ref>. § NUMERICAL RESULTS In many cases, especially with variable coefficients, it is not easy to obtain an analytical solution for given problem (<ref>). Since the demand for finding energy function in (<ref>) involves u and its derivatives, these quantities should be calculated by an efficient numerical technique. 
In the next part, we first briefly summarize a robust one, known as the Method of Lines (MOL) approach, which has been used successfully in many previous studies related to Euler-Bernoulli beam equations with the classical boundary conditions <cit.>-<cit.>. Then we show the implementation of this method for the considered problem (<ref>) and demonstrate its high-accuracy performance. §.§ Method of Lines Approach for the Numerical Solution of (<ref>) The MOL approach is based on a two-stage decomposition principle for (<ref>); first, a semi-discrete formula is obtained from the variational formulation by the Finite Element Method (FEM) with Hermite cubic shape functions, then the full discretization is generated by appropriate second-order time integrators. At the end of this process an algebraic system is obtained which is simple to solve. This technique is commonly employed, particularly in the case of dynamical multi-dimensional phenomena. Assume the finite dimensional space V_h⊂𝒱^2(0,ℓ) spanned by the Hermite cubic shape functions {ψ_i}_i=1^2M by uniformly discretizing the spatial domain 0=x_1<x_2<⋯<x_M=ℓ (where h=ℓ/(M-1)). Consider the following semi-discrete Galerkin approximation of the problem (<ref>). For all t∈(0,T], find u_h(·,t)∈ V_h such that ∀ v_h∈ V_h, {[ (u_h,tt(·,t),v_h)+(μ(·)u_h,t(·,t),v_h)+a(u_h(·,t),v_h) =; .5cm -[k_d u_h(ℓ,t)+k_v u_h,t(ℓ,t)]v_h(ℓ) -[k_r u_h,x(ℓ,t)+k_a u_h,xt(ℓ,t)]v_h,x(ℓ),; u_h(x,0)=u_0(x),   u_h,t(x,0)=u_1(x). ]. Here u_h(x,t) is the finite element approximation of the weak solution of (<ref>) and the symmetric bilinear functional a:H^2(0,ℓ)× H^2(0,ℓ)→ℝ is defined by a(w,v):=(r(·)w_xx,v_xx). The above second-order system of ODEs can be approximately solved by using the following second-order backward finite difference approximations of u_h,t(x,t_j) and u_h,tt(x,t_j) with the uniform temporal discretization 0=t_1<t_2<⋯<t_N=T (where h_t=T/(N-1)): u_h,t(x,t_j)≈∂_t^-U_h^j(x):=[3u_h(x,t_j)-4u_h(x,t_j-1)+u_h(x,t_j-2)]/(2 h_t), u_h,tt(x,t_j)≈∂_tt^-U_h^j(x):=[2u_h(x,t_j)-5u_h(x,t_j-1)+4u_h(x,t_j-2)-u_h(x,t_j-3)]/h_t^2. By substituting these difference quotients for u_h(x,t) in the semi-discrete analogue (<ref>), one can get the following fully discrete algebraic problem, whose solution U_h^j(x) is the approximate solution of (<ref>) at t=t_j such that U_h^j≈ u(·,t_j). For each j=1,2,...,N, find U_h^j∈ V_h such that ∀ v_h∈ V_h, {[ (∂_tt^- U_h^j,v_h)+(μ(·)∂_t^- U_h^j,v_h)+a(U_h^j,v_h)=; .5cm -[k_d U_h^j(ℓ)+k_v ∂_t^-U_h^j(ℓ)]v_h(ℓ) -[k_r U_h,x^j(ℓ)+k_a ∂_t^-U_h,x^j(ℓ)]v_h,x(ℓ). ]. In order to compare the numerical and exact solutions on Cartesian coordinates, we define U_h(x,t) as the linear interpolation of the set of all solutions {U_h^j∈ V_h}_j=1^N in the temporal dimension such that for j=1, ⋯, N-1, U_h(x,t)|_[t_j,t_j+1]:=(t-t_j)/h_t U_h^j+1(x)-(t-t_j+1)/h_t U_h^j(x). In the next section, we test the success of this MOL technique on a problem for which we know the exact solution and develop a simple method for approximating the desired energy function. §.§ Test Problem The numerical studies below allow us to analyze graphically the influence of the boundary control parameters on the stabilization of the beam vibration and on the asymptotic behaviour of the energy of the system. We also illustrate the verification of the theoretical results throughout the paper by this numerical test.
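Before specifying the test problem itself, we note that the second-order accuracy of the backward difference quotients ∂_t^- and ∂_tt^- defined above is easy to verify numerically. The snippet below is our own sanity check (it is not part of the authors' FEM code); halving h_t reduces both errors by roughly a factor of four, confirming O(h_t^2) accuracy.

```python
import numpy as np

u = lambda t: np.exp(-2.0 * t)          # a smooth test signal
du = lambda t: -2.0 * np.exp(-2.0 * t)  # exact first derivative
ddu = lambda t: 4.0 * np.exp(-2.0 * t)  # exact second derivative

t0 = 1.0
for ht in (1e-2, 5e-3, 2.5e-3):
    d1 = (3 * u(t0) - 4 * u(t0 - ht) + u(t0 - 2 * ht)) / (2 * ht)
    d2 = (2 * u(t0) - 5 * u(t0 - ht) + 4 * u(t0 - 2 * ht) - u(t0 - 3 * ht)) / ht**2
    print(f"h_t = {ht:.4f}  err(u_t) = {abs(d1 - du(t0)):.2e}  "
          f"err(u_tt) = {abs(d2 - ddu(t0)):.2e}")
```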
{[ u_tt+2 u_t+((1+x)u_xx)_xx=0,    (x,t) ∈Ω_T:=(0,1)×(0,1.5],; [4pt] u(x,0)=x^2,   u_t(x,0)=-2 x^2,    x∈ (0,1),; [4pt] u(0,t)=u_x(0,t)=0,   t∈ [0,1.5]; -(1+x)u_xx|_x=1= 6 u_x(1,t)+3 u_xt(1,t)=-4exp(-2t),   t∈ [0,1.5]; [4pt] ((1+x)u_xx)_x|_x=1=4 u(1,t)+2 u_t(1,t)=2exp(-2t),   t∈ [0,2], ]. Here boundary spring parameters are k_r=6,  k_d=4 and the damper parameters are k_a=3,  k_v=2. The exact solution of (<ref>) and its first partial derivative with respect to x are u(x,t)=x^2exp(-2t) and u_x(x,t)=2 x exp(-2t). Numerical approximation of these functions can be found directly from the MOL technique in (<ref>) with the ratio of mesh parameters h_x/h_t=40. Corresponding approximate results are quite accurate and illustrated in Fig. <ref> and Fig. <ref>. The energy function ℰ(t) and auxiliary function 𝒥(t) for the given problem (<ref>) can be found respectively ℰ(t)=17.4 exp(-4t) and 𝒥(t)=6.8 exp(-4t). In order to find their approximations ℰ_h and 𝒥_h one needs to compute u_t(x,t) and u_xx(x,t). For this, we use centered difference quotient as follows. {[ u_t(x,t_j) ≈ ∂_t U_h(x,t_j) =[U_h(x,t_j+1)-U_h(x,t_j-1)]/2 h_t,; u_xx(x_i,t) ≈ ∂_x U_h,x(x_i,t) =[U_h,x(x_i+1,t)-U_h,x(x_i-1,t)]/2 h_x. ]. Approximate form of these derivatives ∂_t U_h(x,t) and ∂_x U_h,x(x,t) are obtained as a result of this centered difference approach and shown in Fig. <ref> and Fig. <ref> with their absolute errors. Therefore, by replacing all of these approximations represented in Figs. <ref>-<ref> with corresponding exact quantities in (<ref>) and (<ref>), we obtain desired approximations ℰ_h ≈ ℰ(t)=17.4 exp(-4t) and 𝒥_h ≈ 𝒥(t)=6.8 exp(-4t). The accuracy of these approximations are illustrated in Fig. <ref> (right). The upper bound of 𝒥(t) and ℰ(t) are follows from (<ref>) and (<ref>), respectively. Here β_0=1/2 and β_1=5, then 𝒥(t)=6.8 exp(-4t)≤β_1 ℰ(t)=87 exp(-4t). Similarly, λ=1, M_d∈(1,12) and σ∈(0,1/3) for the considered test problem (<ref>). Therefore, ℰ(t)=17.4 exp(-4t) ≤ 17.4 exp(-t/3)< M_dexp(-σ t) ℰ(0). All these numerical studies related to ℰ(t) and 𝒥(t) verify the theoretical results given in Proposition <ref> and Theorem <ref> and are illustrated in Fig. <ref> (left). § SOME PRELIMINARY CONCLUSIONS In this study we propose an approach which allows to obtain an explicit form of energy decay estimate for typical systems governed by Euler-Bernoulli beam controlled by boundary springs and dampers. As far as our knowledge extends, the relationship between the decay rate parameter σ>0 in the exponential stability estimate ℰ(t)≤ M_d e^-σ t ℰ(0) and the physical parameters of the problem, including the damping parameters and the boundary dampers, was established here for the first time in the literature. This achievement was made possible through the utilization of a mathematical method rooted in the Lyapunov stability approach. It can be shown that in addition to the above studied cases, the considered approach is also applicable for cases of pinned-pinned, pinned-sliding, sliding-pinned, and sliding-sliding boundary conditions, including various types of inputs on the boundary x=ℓ. 6 Chen:1987a G. Chen, S. G. Krantz, D. W. Ma, C. E. Wayne, and H. H. West, The Euler-Bernoulli beam equation with boundary energy dissipation, in Operator Methods for Optimal Control Problems, S. J. Lee, ed., Marcell-Dekker, New York, 1987, pp 67–96. Chen:1987b G. Chen, M.C. Delfour, A.M. Krall, G. Payre, Modeling, stabilization and control of serially connected beams, SIAM J. Control Optim. 25 (3) (1987) 526–546. Guo:2001 B.Z. Guo, R. 
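The two closed-form expressions ℰ(t)=17.4 exp(-4t) and 𝒥(t)=6.8 exp(-4t) quoted above are easy to confirm symbolically. The following SymPy sketch (ours, not part of the original computations) plugs the exact solution u=x^2 exp(-2t) into the definitions of ℰ(t) and 𝒥(t) with the test-problem coefficients and recovers both expressions.

```python
import sympy as sp

x, t = sp.symbols("x t", positive=True)
u = x**2 * sp.exp(-2 * t)          # exact solution of the test problem
rho, mu, r = 1, 2, 1 + x           # rho(x), mu(x), r(x)
kr, kd, ka, kv, l = 6, 4, 3, 2, 1  # boundary springs and dampers, beam length

ut, ux, uxx = u.diff(t), u.diff(x), u.diff(x, 2)

E = (sp.integrate(rho * ut**2 + r * uxx**2, (x, 0, l)) / 2
     + kr * ux.subs(x, l)**2 / 2 + kd * u.subs(x, l)**2 / 2)
J = (sp.integrate(rho * u * ut, (x, 0, l)) + sp.integrate(mu * u**2, (x, 0, l)) / 2
     + ka * ux.subs(x, l)**2 / 2 + kv * u.subs(x, l)**2 / 2)

print(sp.simplify(E))   # 87*exp(-4*t)/5  = 17.4 exp(-4t)
print(sp.simplify(J))   # 34*exp(-4*t)/5  =  6.8 exp(-4t)
```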
Yu, The Riesz basis property of discrete operators and application to a Euler–Bernoulli beam equation with boundary linear feedback control, IMA J. Math. Control Inform. 18 (2001) 241–251. Guo:2002 B.Z. Guo, Riesz basis property and exponential stability of controlled Euler–Bernoulli beam equations with variable coefficients, SIAM J. Control Optim., 40(6) (2022) 1905–1923 Guo-Wang:2005 B.-Z. Guo, J.M. Wang, S.-P. Yung, On the C_0-semigroup generation and exponential stability resulting from a shear force feedback on a rotating beam, Systems Control Lett. 54 (2005) 557–574. Hasanov-Romanov:2021 A. Hasanov Hasanoglu, A.G. Romanov, Introduction to Inverse Problems for Differential Equations, 2nd ed, Springer, New York, 2021. AH:OB2016 A. Hasanov and O. Baysal, Identification of unknown temporal and spatial load distributions in a vibrating Euler-Bernoulli beam from Dirichlet boundary measured data, Automatica 71 (2016) 106–117. Hasanov-Baysal-Itou:2019 A. Hasanov, O. Baysal and H. Itou, Identification of an unknown shear force in a cantilever Euler-Bernoulli beam from measured boundary bending moment, J. Inverse Ill-posed Probl. 27(6)(2019 ) 859–876. Hasanov-Baysal-Sebu:2019 A. Hasanov, O. Baysal, C. Sebu, Identification of an unknown shear force in the Euler-Bernoulli cantilever beam from measured boundary deflection, Inverse Probl. 35(2019), 115008. Hasanov-Baysal:2019 A. Hasanov and O. Baysal, Identification of a temporal load in a cantilever beam from measured bending moment, Inverse Prob., 35(2019), 105005. Hasanov-Kawano-Baysal:2023 A. Hasanov, A. Kawano and O. Baysal, Reconstruction of shear force in Atomic Force Microscopy from measured displacement of the cone-shaped cantilever tip, arXiv:2306.03037v1, [math-ph] 2023. Inman:2015 D. J. Inman, Engineering Vibration, 4th Edn., Pearson Education Limited, 2014 Karagiannis:2015 D. Karagiannis, V. Radisavljevic-Gajic, Exponential stability for a class of boundary conditions on a Euler-Bernoulli beam subject to disturbances via boundary control, Journal of Sound and Vibration 446 (2019) 387–411. Krall:1989 A.M. Krall, Asymptotic stability of the Euler-Bernoulli beam with boundary control, J. Math. Anal. Appl. 137 (1989) 288–295. Lazzari:2012 B. Lazzari, R. Nibbi, On the exponential decay of the Euler–Bernoulli beam with boundary energy dissipation, J. Math. Anal. Appl. 389 (2012) 1078–1085. Sakthivel-Hasanov:2023 K Sakthivel, A Hasanov, D Anjuna, Inverse problems of identifying the unknown transverse shear force in the Euler-Bernoulli beam with Kelvin-Voigt damping, Journal of Inverse and Ill-Posed Problems (2023). https://doi.org/10.1515/jiip-2022-0053 Toure:2015 A. Touré, A. Coulibaly, A. A. H. Kouassi, Riesz basis and exponential stability for variable Euler-Bernoulli beams with variable coefficients and indefinite damping under a force control in position and vellocity. Electronic Journal of Differential Equations, 54 (2015), 1–20. Wang:2005 J. M. Wang, G. Q. Xu, S. P. Yung; Riesz basis property, exponential stability of variable coefficient Euler-Bernoulli beams with indefinite damping. IMA J. Appl. Math, 70 (2005), 459–477. Zubov:1957 V. L. Zubov, Methods of A. M. Liapunov and their Application. Leningrad 1957; (English Translation) P. Noordhoff Ltd. Gorning, Netherlands, 1964.
http://arxiv.org/abs/2307.04522v1
20230710124559
Accretion Flow Properties of EXO 1846-031 During its Multi-Peaked Outburst After Long Quiescence
[ "Sujoy Kumar Nath", "Dipak Debnath", "Kaushik Chatterjee", "Riya Bhowmick", "Hsiang-Kuang Chang", "Sandip K. Chakrabarti" ]
astro-ph.HE
[ "astro-ph.HE" ]
Dipak Debnath [email protected] [email protected] 0000-0002-6640-0301]Sujoy Kumar Nath Indian Center for Space Physics, 466 Barakhola, Netai Nagar, Kolkata 700099, India 0000-0003-1856-5504]Dipak Debnath Institute of Astronomy Space and Earth Science, AJ 316, Sector II, Salt Lake, Kolkata 700091, India Institute of Astronomy, National Tsing Hua University, Hsinchu 300044, Taiwan 0000-0002-6252-3750]Kaushik Chatterjee South Western Institute for Astronomical Research, Yunnan University, University Town, Chenggong, Kunming 650500, P. R. China Institute of Astronomy Space and Earth Science, AJ 316, Sector II, Salt Lake, Kolkata 700091, India Institute of Astronomy, National Tsing Hua University, Hsinchu 300044, Taiwan 0000-0002-7658-0350]Riya Bhowmick Indian Center for Space Physics, 466 Barakhola, Netai Nagar, Kolkata 700099, India 0000-0002-5617-3117]Hsiang-Kuang Chang Institute of Astronomy, National Tsing Hua University, Hsinchu 300044, Taiwan Department of Physics, National Tsing Hua University, Hsinchu 300044, Taiwan 0000-0002-0193-1136]Sandip K. Chakrabarti Indian Center for Space Physics, 466 Barakhola, Netai Nagar, Kolkata 700099, India We study the recent outburst of the black hole candidate EXO 1846-031 which went into an outburst in 2019 after almost 34 years in quiescence. We use archival data from Swift/XRT, MAXI/GSC, NICER/XTI and NuSTAR/FPM satellites/instruments to study the evolution of the spectral and temporal properties of the source during the outburst. Low energy X-ray flux of the outburst shows multiple peaks making it a multipeak outburst. Evolving type-C quasi-periodic oscillations (QPOs) are observed in the NICER data in the hard, hard intermediate and soft intermediate states. We use the physical Two Component Advective Flow (TCAF) model to analyze the combined spectra of multiple satellite instruments. According to the TCAF model, the accreting matter is divided into Keplerian and sub-Keplerian parts, and the variation in the observed spectra in different spectral states arises out of the variable contributions of these two types of accreting matter in the total accretion rate. Studying the evolution of the accretion rates and other properties of the accretion flow obtained from the spectral analysis, we show how the multiple peaks in the outburst flux arises out of discontinuous supply and different radial velocities of two types of accreting matter from the pile-up radius. We detect an Fe emission line at ∼6.6 keV in the hard and the intermediate states in the NICER spectra. We determine the probable mass of the black hole to be 12.43^+0.14_-0.03 M_⊙ from the spectral analysis with the TCAF model. We also estimate viscous time scale of the source in this outburst to be ∼ 8 days from the peak difference of the Keplerian and sub-Keplerian mass accretion rates. § INTRODUCTION A Low mass Black hole X-ray binary system (BHXRBs) consists of a stellar-mass main-sequence star orbiting around a stellar-mass black hole (SMBH). Transient BHXRBs spend most of their lifetime in a quiescent state, exhibiting very low X-ray luminosity (L_X ∼ 10^30-33 ergs/s; Tetarenko et al. 2016). Occasionally transient BHXRBs show bright outbursts, lasting for a few weeks to a few months, during which the source becomes extremely luminous (L_X ∼ 10^37-38 ergs/s; Tanaka & Shibazaki 1996). Due to its non-zero angular momentum, matter from the companion star accretes onto the black hole (BH), forming an inward-spiralling accretion disk. 
The accumulating matter heats up the disk, and the matter in the disk gets ionized causing thermal-viscous instability (Dubus et al. 2001; Lasota 2001). As a result of the instability, the viscosity of the ionized matter in the outer disk increases suddenly. This causes more angular momentum to be redistributed outward, and the accretion rate in the inner disk increases rapidly, triggering an outburst (Chakrabarti & Titarchuk 1995; Ebisawa et al. 1996; Chakrabarti 2013). During an outburst, low mass BHXRBs go through a succession of `accretion states', showing a rapid change in their temporal and spectral properties (Fender et al. 2004; Homan & Belloni 2005; McClintock & Remillard, 2006). During the initial phase of the outburst, the source luminosity is low and the energy spectrum can be approximated with a hard non-thermal power-law component. This state is called the hard state (HS). As the outburst progresses, the source transits through the hard-intermediate state (HIMS) and soft-intermediate state (SIMS), when the source luminosity gradually increases and the contribution of the low energy thermal photons increase, which gradually softens the spectrum. The source luminosity becomes maximum in the soft state (SS) when the spectrum is dominated by a thermal multicolor disk blackbody. After that, the source luminosity gradually decreases, and the source transits through SIMS, HIMS and finally, to the HS. Low-frequency peaked and narrow noise components called quasi-periodic oscillations (QPOs) has been observed in the power-density spectra (PDS) of most BHXRBs. Their properties (centroid frequency, Q-value, rms amplitude and noise) also vary depending on the spectral state, and Casella et al. (2005) have classified these LFQPOs into three types: A, B, and C. Generally, type-C QPOs with monotonically increasing or decreasing centroid frequency can be observed in the HS and HIMS, while no QPOs are observed in the SS. The evolution of these spectral and temporal properties are strongly correlated, which is manifested in the `Hardness-Intensity Diagram' (HID; Belloni et al. 2005; Debnath et al. 2008) or the `Accretion Rate Ratio-Intensity Diagram' (ARRID; Jana et al. 2016). Two separate mechanisms are responsible for the production of low and high-energy X-ray radiation from the accretion disks. An optically thick, geometrically thin Keplerian flow dissipates the gravitational energy of the accreting matter through viscosity and emits multicolor thermal blackbody photons (Novikov & Thorne 1973; Shakura & Sunyaev 1973). When these low-energy photons get intercepted by a hot electron cloud, they get repeatedly inverse Comptonised and are emitted as high-energy X-rays (Sunyaev & Titarchuk 1980, 1985). While there is general agreement about the emission mechanisms, the actual nature of the hot electron cloud or the Compton cloud has been a matter of debate. According to the Two-Component Advective Flow (TCAF) model (Chakrabarti & Titarchuk 1995; Chakrabarti 1997, Chakrabarti 2018), the CENtrifugal pressure supported BOundary Layer (CENBOL) acts as the Compton cloud. This CENBOL is formed near the black hole when the low viscous, freely falling sub-Keplerian matter comes to a halt as the outward centrifugal pressure becomes comparable to the inward gravitational force, and it forms a standing or oscillating shock. The post-shock region becomes hot and puffs up and forms a torus-like region of hot ionised matter. 
In the equatorial region, the viscosity remains higher than a certain critical value to maintain Keplerian angular momentum, and this Keplerian matter becomes optically thick and emits the multicolor soft photons which are then partially intercepted by the CENBOL and emitted as hard non-thermal photons. In the TCAF model, any observed spectrum depends on four independent flow parameters, i.e. the accretion rates of the Keplerian and the sub-Keplerian matter, the location of the shock which is the outer boundary of CENBOL, and the ratio of the post-shock to pre-shock matter densities (compression ratio). Additionally, it also depends on the mass of the BH and a normalization factor which is the ratio of emitted to observed X-ray spectrum, both of which are constants for a given source. As an outburst starts, the faster and hotter sub-Keplerian matter rushes towards the BH and forms the CENBOL which increases the hard Comptonised flux. The Keplerian matter, which has a low velocity due to the higher viscosity, gradually moves towards the BH and cools down the CENBOL. The CENBOL region shrinks in size, the hard photon flux decreases and the spectra become gradually softer. As the outer boundary of the CENBOL oscillates (e.g. due to a resonance between the Compton cooling and compressional heating), the Comptonized hard X-ray intensity also varies which gives rise to the observed quasi-periodic oscillations. This CENBOL also acts as the base of the jet/outflows. To study how the physical flow parameters vary during an outburst and to estimate the intrinsic parameters of the BH, this TCAF model has been incorporated into the spectral analysis software package pcrXSPEC (Arnaud, 1996) as a local additive table model (Debnath et al. 2014, 2015). So far, the TCAF model has been used to study the accretion flow dynamics of more than fifteen BHXRBs (Mondal et al. 2016; Debnath et al. 2017; Chatterjee et al. 2021). Intrinsic parameters, like the BH mass and its distance have been estimated (Molla et al. 2017; Chatterjee et al. 2019; Jana et al. 2020a; Nath et al. 2023). The origin of QPOs and jet/outflows has also been successfully studied using this model (Mondal et al. 2015; Chakrabarti et al. 2015; Chatterjee et al. 2016, Jana et al. 2017; Debnath et al. 2021) Galactic X-ray transient EXO 1846-031 was first discovered by EXOSAT during its outburst in 1985 (Parmar & White 1985). Based on the ultra-soft component in the spectra of this outburst, Parmar et al. (1993) indicated the source EXO 1846-031 is a BH candidate. After the first outburst, the source remained in quiescence for almost 34 years. Recently, the source was again found to be in outburst by MAXI on 2019 July 23 (Negoro et al. 2019). Evolving Type-C QPOs were observed in the Insight-HXMT and NICER data (Liu et al. 2021) which is generally observed in BHXRBs. From strong reflection features in the NuSTAR spectrum, Draghis et al. (2020) suggested EXO 1846-031 to be a BH with nearly maximal spin (a=0.99^+0.002_-0.001) with a disk inclination of θ≈73^∘. From Insight-HXMT and NuSTAR data, Wang et al. (2021) found signatures of an ionised disk wind with velocities up to 0.06c. They suggest EXO 1846-031 is a low inclination system with θ≈40^∘. Ren et al. (2022) investigated the spectral evolution from Insight-HXMT data and suggested that the maximal spin found by Draghis et al. (2020) might be affected by choice of a different hardening factor (f_col). 
Evidence of the presence of a pair of 3:2 ratio high-frequency quasi-periodic oscillations (HFQPO) was found, and based on this the probable mass of the source was determined to be 3.4±0.2  M_⊙ (Strohmayer & Nicer Observatory Science Working Group 2020). Analysing the radio observations from MeerKAT, VLA and AMI-LA, Williams et al. (2022) suggested a distance range of 2.4–7.5 kpc, and a jet speed of β_int=0.29c. We study the spectral and temporal properties of EXO 1846-031 during its 2019 outburst using Swift/XRT, Swift/BAT, MAXI/GSC, NICER/XTI and NuSTAR/FPM data with the TCAF model in this paper. We discuss the observation, data reduction, and analysis procedures briefly In 2. In 3 we present the results of our analysis. In 4, we discuss the obtained results and draw conclusions. 1.0cm § OBSERVATION AND DATA ANALYSIS §.§ Observations We study the 2019-2020 outburst of EXO 1846-031 using archival data from Swift (Gehrels et al. 2004), NICER (Gendreau et al. 2012), MAXI (Matsuoka et al. 2009), and NuSTAR (Harrison et al. 2013) satellites. We study the evolution of the X-ray fluxes in the soft and hard energy bands and their ratios using MAXI/GSC (2-10 keV) and Swift/BAT (15-50 keV) data of ∼ 10 months from 2019 July 9 (MJD=58673) to 2020 April 10 (MJD=58949). For the detailed temporal and spectral study, we use data from Swift/XRT, NICER/XTI, MAXI/GSC and NuSTAR/FPM satellites/instruments. Although NICER and Swift monitored the source regularly in the rising phase of the outburst, during the declining phase, there is a data gap of ∼ 3 months for Swift and ∼ 4 months for NICER. We use 14 data of NICER/XTI (1-11 keV) and 11 data of Swift/XRT (1-10 keV) for spectral analysis. To study the spectra in a wider energy band, we also use MAXI/GSC (7-20 keV) and NuSTAR/FPM (4-79 keV) simultaneously with NICER and Swift data. A detailed log of the observations is given in Table 1. §.§ Data Reduction §.§.§ Swift Swift/XRT window timing (WT) mode data were used in our analysis. Level 1 data files obtained from the archive are processed with the qcrXRTPIPELINE task to produce Level 2 clean event files. A circular region of radius 30” around the source location is then used to extract the source spectra and a region of the same radius is chosen away from the source to extract the background spectra using the tool qcrXSELECT. ARF files are created using the tool qcrXRTMKARF and corresponding RMFs are obtained from the qcrCALDB. Using the qcrGRPPHA tool, the spectra are rebinned to have at least 20 counts/bin. Swift/BAT daily lightcurves are obtained from the Swift https://swift.gsfc.nasa.gov/results/transients/weak/EXO1846-031/website. §.§.§ NICER NICER is an external payload attached to the International Space Station which has an X-ray timing instrument (XTI; Gendreau et al. 2012) working in the energy range 0.2-12 keV with a timing resolution of ∼100 ns and spectral resolution of ∼85 eV at 1 keV. For analysis, the Level 1 data files are processed with qcrnicerl2 script in the latest caldb environment (ver. xti20221001) to obtain Level 2 clean event files. The command qcrbarycorr is then used to apply barycentric correction to the event files. The lightcurves and spectra are extracted from these barycentre-corrected event files using the qcrXSELECT task. The qcrnibackgen3C50 tool (Remillard et al. 2022) is then used to simulate the background corresponding to each observation. The spectra are then rebinned to have at least 20 counts/bin with the qcrGRPPHA task. 
§.§.§ NuSTAR NuSTAR raw data from the web archive is reduced with the NuSTAR data analysis software (qcrNuSTARDAS, version 1.4.1). Cleaned event files are produced using the qcrnupipeline task in the presence of the latest calibration files. With the qcrXSELECT task, a circular region of 60” centred at the source coordinates is chosen as the source region, and a circular region with the same radius away from the source location is chosen as the background region. The qcrnuproduct task is then used to extract the spectrum, ARF and RMF files. The extracted spectra are then rebinned to have at least 30 counts/bin with the qcrGRPPHA task. §.§.§ MAXI MAXI/GSC spectra are obtained using the http://maxi.riken.jp/mxondem/MAXI on-demand process web tool (Matsuoka et al. 2009). To study the evolution of the X-ray fluxes, daily average lightcurves are obtained from the MAXI http://maxi.riken.jp/star_data/J1849-030/J1849-030.htmlwebsite. §.§ Data Analysis Daily average light curve data of MAXI/GSC and Swift/BAT are used to study the variation of the X-ray flux in various energy bands throughout the outburst. To study the hardness variations, we use two types of hardness ratios, namely hardness ratio 1 (HR1) i.e. the ratio of 15-50 keV Swift/BAT flux in mCrab to 2-10 keV MAXI/GSC flux in mCrab, and hardness ratio 2 (HR2) i.e. the ratio of 4-10 keV to 2-4 keV MAXI/GSC flux. To search for the presence of LFQPOs, we use the qcrpowspec task to generate power density spectra (PDS) from 0.01 s time binned light curves of NICER. The light curve of a total observation is separated into a number of intervals, each of which contains 8192 newbins. For each interval, a PDS is created, and these individual PDSs are averaged to generate a single PDS which was then geometrically rebinned. We model the PDSs with multiple Lorentzian models in qcrXSPEC version 12.11.1 (Arnaud 1996) to account for the broadband noise, QPOs and its harmonics. From the fits we obtain the QPO frequencies (ν_QPO), width (Δν), Q-value (Q=ν_QPO/Δν) and RMS (%) amplitude. We utilize HEASARC's spectral analysis software package qcrXSPEC version 12.11.1 (Arnaud 1996) for analyzing the spectra. All the spectra are fitted with the TCAF model based local additive table model (Debnath et al. 2014). To fit spectra using the TCAF model, four input flow parameters are essential: (1) the Keplerian disk accretion rate (ṁ_d in Ṁ_Edd), (2) the sub-Keplerian halo accretion rate (ṁ_h in Ṁ_Edd), (3) the shock location (X_s in Schwarzschild radius r_s=2 GM_BH/c^2), and (4) the dimensionless shock compression ratio (R = ρ_+/ρ_-, ratio of the post-shock to the pre-shock matter density). In addition, one system parameter, i.e., the mass of the BH (M_BH in M_⊙) and one instrument parameter, i.e. the model normalization (N) are required. To account for the interstellar absorption, we use the qcrTBabs model with qcrvern cross-sections (Verner et al. 1996) and qcrwilm abundances (Wilms et al. 2000). We use the qcrsmedge model to account for the instrumental features in the NICER spectra at ∼1.8 keV. § RESULTS After almost 34 years in quiescence, EXO 1846-031 again went into an outburst on 2019 July 23 (MJD 58687) which lasted for ∼10 months. To examine the nature of the outburst and the accretion flow properties during the outburst, we carried out a detailed temporal and spectral study using data from multiple satellites. The results of the study are presented below. 
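Before presenting the results, we illustrate the Lorentzian decomposition of the PDS described above with a short sketch. This is our own illustration on synthetic data (the actual fits were performed in XSPEC, as stated above); the normalization convention, assuming an rms-squared normalised PDS, and all numerical values here are ours.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(nu, norm, nu0, fwhm):
    # 'norm' is the integrated power of the component (rms^2 units assumed)
    return norm * (fwhm / (2 * np.pi)) / ((nu - nu0) ** 2 + (fwhm / 2) ** 2)

def pds_model(nu, n1, w1, n2, f2, w2):
    # zero-centred Lorentzian (flat-top noise) + narrow Lorentzian (type-C QPO)
    return lorentzian(nu, n1, 0.0, w1) + lorentzian(nu, n2, f2, w2)

rng = np.random.default_rng(1)
freq = np.linspace(0.05, 20.0, 800)
truth = (0.02, 2.0, 0.01, 3.24, 0.3)               # fake PDS for illustration only
power = pds_model(freq, *truth) * (1 + 0.05 * rng.standard_normal(freq.size))

popt, _ = curve_fit(pds_model, freq, power, p0=[0.01, 1.0, 0.005, 3.0, 0.2])
n1, w1, n2, f2, w2 = popt
print("nu_QPO = %.2f Hz, Q = %.1f, rms = %.1f %%" % (f2, f2 / w2, 100 * np.sqrt(n2)))
```

For the illustrative parameters above this yields a centroid near 3.24 Hz with Q ≈ 11 and an rms amplitude of ≈ 10%, i.e. values in the type-C range quoted in the text.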
§.§ Temporal Properties To study the outburst profile in different energy bands and the variation of hardness ratios, we use MAXI/GSC and Swift/BAT daily light curve data. To study the low timescale variability features, we use NICER/XTI data due to its high temporal resolution. §.§.§ Outburst Profile and Hardness Ratios We show the variation of X-ray fluxes in different energy bands and their hardness ratios from 2019 July 9 (MJD=58673) to 2020 April 10 (MJD=58949) in various panels of Fig. 1. The variation of the Swift/BAT (15-50 keV) flux and the MAXI/GSC (2-10 keV) flux is shown in panel (a), while the variation of their hardness ratio (HR1) is shown in panel (b). Likewise, panel (c) shows the variation of MAXI/GSC flux in lower (2-4 keV) and higher (4-10 keV) energy bands while panel (d) shows the variation in their hardness ratio (HR2). From the Figure, we can observe that at the start of the outburst, both soft and hard fluxes increased rapidly, and the 15-50 keV Swift/BAT flux attained a maximum on MJD 58697, roughly 8 days before the softer (2-4 keV and 4-10 keV) MAXI/GSC fluxes. The hardness ratios (HR1 and HR2) also increased and attained a maximum around MJD 58691 and decreased quickly to a low level. After the initial maximum, the Swift/BAT flux slowly decreased and decayed into quiescence at the end of the outburst. On the other hand, after the maximum around MJD 58705, the MAXI/GSC fluxes (in different energy bands) decreased for ∼13 days and then started to increase again. They attained a maximum around MJD 58736 and then decreased with an almost constant rate for ∼65 days. After that, the GSC fluxes remained at a constant low level till the end of the outburst. Looking at the outburst profile, we can see that this 2019 outburst of EXO 1846-031 has shown two stronger flux peaks in the rising phase and two very weak peaks in the declining phase. To estimate the total amount of flux released during each of the peaks, we fit the 2-10 keV MAXI/GSC lightcurve using FRED profiles (Kocevski et al. 2003). A combination of four FRED profiles are used to fit the complete outburst (Fig. 2) (see, Chakrabarti et al. 2019, Bhowmick et al. 2021, Chatterjee et al. 2022 for more details). In the Fig. 2, the blue curve marks the combined fit of the entire outburst and the red curves mark individual FRED fitted peaks of the outburst. We choose 12 mCrab as the threshold of flux for the outburst. The horizontal black line indicates the 12 mCrab flux value. Two vertical lines mark the start and the end of the outburst when the X-ray flux rises above and below this 12 mCrab threshold. The total integrated X-ray flux (IFX_tot) of the complete outburst calculated from the combined fit is 39.70^+3.29_-3.05 Crab. The individual integrated flux values (IFX) of the first, second, third and fourth peaks are 6.31^+0.26_-0.25 Crab, 30.82^+2.60_-2.42 Crab, 1.77^+0.60_-0.38 Crab and 0.80^+0.01_-0.16 Crab respectively. IFX values depict the amount of energy release in each peaks. Comparing the IFX values of the four peaks, we can conclude that maximum amount of energy was released during the second peak, i.e., maximum amount of matter got cleared during the time period of the second peak of the outburst. §.§.§ Power Density Spectra (PDS) To study the presence and evolution of LFQPOs during the outburst, we use 0.01 s time-binned NICER light curves. 
We use zero-centred Lorentzian models to fit the broad noise components and narrow Lorentzians to fit the QPO peaks to determine the centroid frequencies, Q-values, rms amplitudes, etc. We find the presence of QPOs in 19 NICER observations in the initial phase of the outburst. The observed QPOs can be classified as type-C which are characterized by a Q-value of ∼ 7-12 and an amplitude of ∼3–16 % rms that are superposed on a broad flat-top noise (Casella et al. 2005). Figure 3 shows a representative pds where a QPO of 3.24 Hz can be seen along with its harmonic at 6.52 Hz. The QPOs are found in the hard, the hard intermediate and the soft intermediate states which are discussed in detail in later sections. §.§ Spectral Properties We use data from Swift/XRT, NICER/XTI, MAXI/GSC and NuSTAR/FPM for spectral analysis in a broad 1-79 keV energy range. We mainly use the absorbed TCAF model to study the spectra. We use the qcrTBabs model for absorption where the hydrogen column density (N_H) was kept free. We found the N_H to vary between 5.12×10^22 cm^-2 and 10.94×10^22 cm^-2 during our analyses period. In the NICER spectra, edge-like residuals are seen at ∼1.8 keV corresponding to the Silicon K edge which is an instrumental feature typical for Si-based detectors (Alabarta et al. 2020, Miller et al. 2018). We use the qcrsmedge model to account for it. An Fe-Kα emission line at ∼6.4 keV is also observed in the NICER spectra of the initial rising phase which was fitted using the qcrGaussian model. We jointly fit the XRT+GSC spectra with qcrconstant*TBabs*(TCAF) model (Fig. 4a) and the NICER+GSC spectra with qcrconstant*TBabs*smedge(TCAF) or qcrconstant*TBabs*smedge(TCAF+gaussian) model (Fig. 4b). In the NICER+NuSTAR spectra, a dip is observed at ∼10 keV in the NuSTAR data. At first, we fit the spectra with qcrconstant*TBabs*smedge(TCAF+Gaussian) model ignoring the dip, and obtain χ^2/DOF=1.79. To improve the statistic, we use the qcrgabs model to account for the dip and get a good statistic with χ^2/DOF=0.91. The corresponding spectra are shown in Fig. 5(a–b). Detailed results of our spectral analysis are shown in Table 2. §.§.§ Evolution of the Spectral states The evolution of various temporal and spectral parameters of our analysis with the TCAF model shows that the source has evolved through different spectral states in this outburst. We get a rough estimation of the state evolution from the outburst profile and the variation of HRs. From the variation of the spectral parameters, we get a clearer picture of the state evolution as they show the actual evolution of the accretion flow dynamics, e.g. the change in the disk and halo accretion rates, the change of the position of the shock and its strength, etc. In the Figure 6, we show the variation of the disk accretion rate (ṁ_d), the halo accretion rate (ṁ_h), the total accretion rate (ṁ_d + ṁ_h) and the accretion rate ratio (ARR = ṁ_h/ṁ_d). In the Figure 7, we show the variation of the best fitted mass parameter (M_BH), the shock location (X_s), the shock compression ratio (R) alongwith the evolution of the QPO centroid frequency. Here we discuss the spectral states in detail. (1) Rising Hard State (HS): The source was in the hard state when we start our spectral analysis on 2019 July 31 (MJD 58695). The total accretion rate was high in this state, and the maximum part of the accreting matter was sub-Keplerian as the ṁ_h was higher than the ṁ_d by almost ∼3 times. 
The ARR was also high in this state, which started to decrease gradually as ṁ_d started to increase and ṁ_h started to decrease as the outburst progressed. The shock started to move towards the BH from a faraway location (460r_s), but its strength was almost constant in this period (R∼1.5). Two LFQPOs were found in this state whose centroid frequency increased from 0.25 Hz to 0.41 Hz. High HR was also observed in this state as the hard flux (Swift/BAT) dominated the soft flux (MAXI/GSC). The source remained in this state until 2019 August 2 (MJD 58697). (2) Rising Hard Intermediate State (HIMS): After MJD 58697, the total accretion rate started to decrease as the previously dominant ṁ_h started to decrease rapidly. The total accretion rate began to increase again after 2019 August 5 (MJD 58700) as ṁ_d started to increase and became dominant. The ARR decreased steadily in this state. Likewise, the shock moved inward rapidly, moving from ∼325 r_s to ∼37 r_s in ∼7 days with decreasing strength. Nine LFQPOs were found in this state whose centroid frequency increased rapidly to ∼7 Hz. The HR decreased in this state as the dominating hard flux began to decrease and soft flux increased steadily. The source stayed in this state until 2019 August 8 (MJD 58703). (3) Rising Soft Intermediate State (SIMS): The total accretion rate decreased and became roughly constant at a low level after MJD 58703. Both the ṁ_d and ṁ_h became steady, with ṁ_d dominating over the ṁ_h. The shock ceased to move towards the BH and came to a halt at ∼35r_s and its strength also became constant. We found eight LFQPOs during the initial part of this state, with their centroid frequency showing a slowly decreasing trend. The hard flux and the soft flux both decreased in this state, causing the HR to become low. This state of the outburst continued until 2019, August 31 (MJD 58726). (4) Soft State/High Soft State (SS/HSS): After MJD 58726, the soft fluxes began to increase rapidly again which is quite unusual. An abrupt change has taken place in the accretion process. The hard 15-50 keV flux remained low, and this shows that the change in the accretion process has only affected the fluxes below 10 keV. The soft fluxes increased up to 2019 September 10 (MJD 58736) and then decreased almost linearly until 2019 November 14 (MJD 58801) and became steady at a low level. The HRs also became low, signifying the source had transitioned into a soft state/high soft state. Although XRT and NICER spectra were available for some days at the start of this state, the TCAF fit of these spectra was statistically unacceptable, which shows that the two component configuration of the accretion flow had been violated. We discuss this in detail in 4. After November 2019, spectral data is unavailable for ∼ 4 months, due to the source becoming sun-constrained (Williams et al. 2022). Hence it became impossible to determine how long the source was in the soft state. (5) Declining Hard State (HS): After 2020 February 26 (MJD 58905), Swift/XRT data became available for spectral analysis. The total accretion rate, the ṁ_d and the ṁ_h all were low in this period. On the other hand, the ARR became high, and the shock also moved outward at ∼250r_s with increased strength. All of these show that the source had already transitioned into the declining hard state. §.§.§ Estimation of BH mass from spectral analysis Mass of the BH (M_BH) is an important parameter for spectral fitting with TCAF. 
Mass of the BH in EXO 1846-031 was previously determined to be 3.24±0.2  M_⊙ based on the presence of 3:2 ratio HFQPOs (Strohmayer & Nicer Observatory Science Working Group 2020). Initially, we tried to fit the spectra with TCAF keeping the mass parameter frozen at this value. But the resulting reduced chi-squares were high and the fits were statistically unacceptable. Hence we keep the mass parameter free during further analysis with TCAF. From our spectral fits, we find the best fitted M_BH values to vary between 7.1-12.6  M_⊙. However, mass of a BH in a BHXRB system is not supposed to change significantly during the course of an outburst. The spread in the mass values obtained from TCAF fits results from random measurement errors, and they do not show the variation of the actual BH mass during the outburst. In our spectral analysis, we fitted the spectra of different energy bands obtained from multiple instruments of different effective areas, which may also contributes to the measurement errors in the mass of the BH. To reduce such errors, we perform a global fit using all spectra in different epochs. We use the model qcrconstant*TBabs*smedge(TCAF+gaussian) and keep the mass parameter linked for all spectra. The joint fit is shown in Fig. 8. From the global fit, we obtain a mass value of the source as 12.43^+0.14_-0.03 M_⊙. § DISCUSSIONS AND CONCLUDING REMARKS EXO 1846-031 is a galactic black hole candidate that went into an outburst in July 2019 after remaining almost 34 years in quiescence after its discovery in 1985. We study the evolution of the temporal and spectral properties of the source during the outburst using observational data from Swift/XRT, Swift/BAT, NICER/XTI, MAXI/GSC and NuSTAR/FPM satellites/instruments. For the spectral analysis we use the physical TCAF model and fit NICER (1–10 keV), combined NICER+GSC (1–20 keV), XRT+GSC (1–20 keV) and NICER+NuSTAR (1–79 keV) spectra for 25 epochs spread over the outburst duration. From our spectral fits, we obtain flow parameters of the system such as the Keplerian disk accretion rate (ṁ_d), the sub-Keplerian halo accretion rate (ṁ_h), the shock location (X_s), and the shock compression ratio (R). As these flow parameters evolve during the outburst, we gain insight into how the accretion flow of matter changes and produces different kinds of spectral and temporal variability. We also estimate the mass of the black hole from our spectral analysis. Generally, transient black hole outbursts show two types of outburst profiles, fast rise exponential decay (FRED) or slow rise slow decay (SRSD) (Debnath et al. 2010). However, in the case of some BHXRBs, the X-ray flux does not decay into quiescence after the first outburst peak. Rather, they show one or more peaks after the main outburst peak before eventually going into the quiescence phase. In literature, such phenomena are known as “reflares”, “rebrightenings” or “mini-outbursts” (e.g. GRO J0422+32, MAXI J1659-152, GRS 1739-278, MAXI J1910-057; Chen et al. 1997, Homan et al. 2013, Yan & Yu 2017, Nath et al. 2023). For the 2019 outburst of EXO 1846-031, we can see from Fig. 1 that both the soft and hard fluxes increase rapidly at the start of the outburst. While the hard flux decayed slowly after attaining its maximum, the soft flux, though it started to decrease initially, began to increase again and attained a maximum comparable with the first peak. This outburst can be classified as a multipeak outburst according to the re-brightening classification scheme of Zhang et al. (2019). 
As matter gets accumulated at the pile-up radius (X_p; Chakrabarti et al. 2019; Bhowmick et al. 2021; Chatterjee et al. 2022) in the quiescence phase, before an outburst, it is heated up and gets ionized. This ionised matter causes a thermal-viscous instability in the matter. This instability increases the viscosity in the disk causing an increased outward redistribution of angular momentum. This causes the matter to flow rapidly inward onto the BH, triggering the outburst (Lasota 2001; Dubus et al. 2001; Hameury 2020). However, this disk instability model (DIM) cannot explain these re-brightenings/mini-outbursts phenomena very well. Although several models have been proposed that explain the reflares (e.g., Kuulkers et al. 1994; Hameury et al. 2000; Zhang et al. 2019), none of them are well verified. Hence we investigate the rebrightening phenomena of EXO 1846-031 with the TCAF picture. During the 2019 outburst, EXO 1846-031 showed two brighter (in the rising phase) and two dimmer peaks (in the declining phase) in the low energy outburst profile. We fitted the 2-10 keV MAXI/GSC outburst profile with multiple FRED models, and from this fit we estimated that the total integrated flux released in the outburst is 39.70^+3.29_-3.05 Crab. The contribution of individual peaks calculated from the individual FRED profiles are 16%, 78%, 4% and 2% respectively for the first, second, third and fourth peaks. Here we observe that although the peak fluxes are roughly same, five times more energy is released during the second peak than the first peak, and this indicates that most of the matter has been released from the X_p during the second peak. This is quite uncommon in transient BHXRBs. At the start of the outburst, when the viscosity at the pile-up radius increased above the critical value, matter began to rush inward. We can see from Fig. 6 that the halo rate is high compared to the disk rate. As the sub-Keplerian matter has low viscosity, it falls freely towards the BH, whereas the Keplerian matter has large viscosity and it moves inward slower in viscous timescale. The sub-Keplerian matter reaches the BH faster than the Keplerian matter, and the halo rate attains peak value before the disk rate. From Fig. 7, we see that the shock is far away in this initial phase. As there is no Keplerian matter to cool the faster-moving sub-Keplerian matter, it forms a large CENBOL, and this CENBOL inverse Comptonizes most of the soft thermal photons and produces a large number of hard photons. Hence we can see from Fig. 1 that the high energy fluxes dominate the low energy fluxes making the HRs high and the source goes into the rising hard state. After MJD 58697, the Keplerian matter begins to cool down the sub-Keplerian matter as it gradually moves towards the BH. The disk rate starts to increase and the halo rate decreases. The CENBOL shrinks in size, and the shock, which is the outer boundary of CENBOL, moves closer to the BH and decreases in strength. Hence the inverse-Comptonization is reduced, the hard flux begins to decrease, the thermal flux increases, the HRs decrease, and the source goes into the hard intermediate state. As the supply of accreting matter gradually decreases, both the disk rate and halo rate decrease, and the CENBOL shrinks farther and the shock moves very closer to the BH. Both the soft and the hard flux decrease, the HRs are decreased to a very low level and the source goes into a soft intermediate state. 
In all three of these states, we find the presence of low-frequency quasi-periodic oscillations (LFQPOs). In the TCAF picture, LFQPOs are produced by the oscillation of the shock, i.e., the outer boundary of the CENBOL. The centroid frequency of the LFQPO (ν_QPO) is roughly inversely proportional to the location of the shock (r_s) (ν_QPO ∼ 1/[r_s(r_s-1)^1/2]: Chakrabarti et al. 2008, Debnath et al. 2010). As we can see from Fig. 7, as the shock moves closer to the BH in the HS and HIMS, the centroid frequency of the QPO increases. As the shock becomes almost steady in the SIMS, the QPO frequency also becomes steady. After some days in the SIMS (∼ MJD 58715), the value of the compression ratio becomes close to one and the halo rate becomes close to zero. This signifies that the post-shock and pre-shock matter densities are equal, which means that the shock has become very weak or disappeared and has moved very close to the black hole. As the shock disappears, the sub-Keplerian and Keplerian components of the accretion flow become essentially the same. This very weak shock was unable to produce any QPOs; hence the QPOs disappear gradually in the later stage of the SIMS. After MJD 58726, the soft fluxes began to increase again while the hard fluxes remained low, which shows that there is an increase in the thermal emission without much of it being inverse-Comptonized. Although some NICER and XRT spectra are available in this phase, we failed to fit these spectra with the TCAF model. This indicates that the two-component configuration of the accretion flow is no longer maintained in this period. The sharp increase in the soft fluxes indicates that the supply of Keplerian matter has increased. This increased supply of Keplerian matter has cooled down the remaining sub-Keplerian matter, and only Keplerian disk accretion occurs in this state. According to previous studies (Chakrabarti et al. 2019, Bhowmick et al. 2021, Chatterjee et al. 2022), accreting matter supplied from the companion starts to accumulate at the pile-up radius (X_p) during the quiescence phase prior to an outburst, and the longer the duration of the quiescence phase, the greater the accumulation of matter. Prior to the outburst in 2019, EXO 1846-031 was in the quiescence phase for a long time (∼34 years). This is very similar to the 2003 outburst of the source H 1743-322, which remained inactive for 25 years before that outburst (Chakrabarti et al. 2019). As in the case of H 1743-322, this long period of inactivity of EXO 1846-031 indicates that a large amount of matter was accumulated at X_p before the outburst. This accumulated matter starts to heat up the accretion flow and gives rise to a convective instability, which increases the viscosity through the resulting turbulence. As the viscosity at X_p increased above the critical value, the outburst was triggered. However, the increase in viscosity was not large enough to release all of the accumulated matter from the pile-up radius. As the sub-Keplerian matter moves faster, all of it gets depleted quickly and the source enters the SIMS, which could also be interpreted as the declining state of the first failed (as the soft state is not reached) outburst. At the end of the SIMS, the viscosity at X_p increases again, and the remaining Keplerian matter is released, triggering the reflare event. We find evidence of a broad absorption feature in the SIMS around ∼10 keV, which we model with gabs with a line energy of 9.71±0.23 keV.
This kind of absorption feature could indicate the presence of highly ionised, high-velocity winds from the accretion disk (Prabhakar et al. 2023), which in turn indicates that the radiation pressure in the disk is very high. This large amount of radiation irradiates the remaining matter at X_p, and an instability builds up again, creating a situation similar to the initial phase before an outburst. This instability again increases the viscosity at X_p and matter starts to accrete towards the BH once more. The majority of the sub-Keplerian matter was accreted during the first peak, and this new accretion consists largely of highly viscous Keplerian matter. This Keplerian matter interacts with the remaining small amount of sub-Keplerian matter and cools it down. From Fig. 1, we can see that after attaining the second maximum, the soft flux decreases almost linearly instead of declining exponentially, which is another indication that only the comparatively slow-moving Keplerian matter is responsible for this reflare. After ∼ MJD 58800, the source became Sun-constrained and no data are available for spectral analysis in the period between MJD 58808 and MJD 58904. Hence the exact date when the source came out of the SS cannot be determined. After MJD 58905, spectral analysis shows that the shock has moved outward to ∼250 r_s with an increased ARR. This indicates that the source has already moved into the declining hard state. The time taken by the highly viscous matter to reach the BH from the pile-up radius is termed the viscous timescale (Chakrabarti et al. 2019). Due to its low viscosity, the sub-Keplerian matter moves toward the BH on the free-fall timescale, whereas the Keplerian matter takes more time to reach the BH due to its higher viscosity. For this reason, it is generally observed that the halo accretion rate attains its peak before the disk rate, and the time difference between the disk and halo peaks allows us to infer the viscous timescale of the source (Debnath et al. 2015, Jana et al. 2016, 2020b). From Fig. 1, we can see that the 15-50 keV Swift/BAT hard flux attains a peak on MJD 58697, and the 2-4 keV MAXI/GSC soft flux attains a peak ∼8 days later on MJD 58705. A similar delay between the peaks of the halo and disk rates is also found (see Fig. 6). Hence we estimate the viscous timescale of this source to be ∼8 days. This large viscous timescale indicates that X_p is far away from the BH and that the size of the accretion disk is large. The mass of the BH in this source has not yet been measured dynamically, so we try to estimate the mass from our spectral fits. The spectrum emitted from the accretion processes around a BH is highly dependent on its mass, as various features of the accretion dynamics, such as the size of the CENBOL and the electron number density inside it, the soft radiation intensity from the Keplerian disk, etc., depend on the mass. We allow the M_BH parameter to vary freely during our spectral analysis so that we get a best fitted value of the parameter from each spectral fit. We find the best fitted values of the parameter to vary in the range 7.1-12.6 M_⊙. This spread in the mass value is a consequence of systematic errors due to the use of data from multiple instruments with different energy ranges and effective areas. To reduce such errors, we jointly fit all the spectra of the different epochs and estimate the most probable mass of the source to be 12.43^+0.14_-0.03 M_⊙.
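As a rough consistency check on the shock-QPO relation ν_QPO ∼ 1/[r_s(r_s-1)^1/2] used above, the snippet below compares the frequency increase implied by the quoted shock locations with the observed rise of the QPO centroid frequency; the omitted prefactor and the dependence on the compression ratio R mean that only order-of-magnitude agreement should be expected.

```python
import math

def qpo_scaling(r_s):
    """Relative QPO frequency ~ 1 / [r_s * sqrt(r_s - 1)]; prefactor and R omitted."""
    return 1.0 / (r_s * math.sqrt(r_s - 1.0))

r_start, r_end = 460.0, 37.0      # shock locations (in r_s units) quoted in the text
nu_start, nu_end = 0.25, 7.0      # first and last QPO centroid frequencies (Hz)

predicted = qpo_scaling(r_end) / qpo_scaling(r_start)
observed = nu_end / nu_start
print(f"predicted increase: x{predicted:.0f}, observed increase: x{observed:.0f}")
# Both factors are a few tens, i.e. the same order of magnitude.
```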
§ SUMMARY AND CONCLUSIONS We study the spectral and temporal properties of the source EXO 1846-031 during its 2019 outburst after almost 34 years in quiescence. We use MAXI/GSC and Swift/BAT daily lightcurve data to study the evolution of the X-ray fluxes and hardness ratios during the outburst. We use the FRED profile to fit the outburst flux profile and estimate the contribution of each flux peak in the total amount of flux released during the outburst. We use data from multiple instruments (Swift/XRT, MAXI/GSC, NICER/XTI, NuSTAR/FPM) for a broadband spectral study over a period of 222 days. We perform our spectral study using physical TCAF model. Based on our spectral analysis, the following conclusions can be drawn: * After 34 years in quiescence, EXO 1846-031 showed an outburst in 2019 that can be classified as a multipeak outburst. * Before the start of the outburst, a large amount of matter was accumulated at the pile up radius X_p (located far away from the BH) and all the matter was not accreted during the first outburst peak. * The broad absorption feature around ∼ 9 keV indicates the presence of a fast moving highly ionized disk wind in the rising SIMS. * As the high X-ray flux irradiates the remaining matter at X_p, the viscosity increases and starts a fresh accretion of matter triggering the reflare. * This increased supply of high viscous Keplerian matter in the reflaring event cools down and washes off the sub-Keplerian matter, and only Keplerian disk accretion happens in the HSS. * Although the source showed two brighter and two dimmer peaks during the outburst, ∼ 78% of total energy has been released in the second flaring event. * From spectral fitting with TCAF, probable mass of the source was estimated to be 12.43^+0.14_-0.03 M_⊙. * From the disk and halo peak difference in the rising phase of the outburst, we estimated viscous time scale of the source to be ∼ 8 days. § ACKNOWLEDGEMENTS This work made use of Swift/XRT, Swift/BAT, NICER/XTI, and NuSTAR/FPM data supplied by the UK Swift Science Data Centre at the University of Leicester, and MAXI/GSC data were provided by RIKEN, JAXA, and the MAXI team. S.K.N. acknowledges support from the SVMCM fellowship, the government of West Bengal. S.K.N. and D.D. acknowledge support from the ISRO-sponsored RESPOND project (ISRO/RES/2/418/17-18) fund. D.D. and K.C. acknowledge visiting research grants of National Tsing Hua University, Taiwan (NSTC 111-2811-M-007-066). R.B. acknowledges support from the CSIR-UGC fellowship (June-2018, 527223). H.-K. C. is supported by NSTC of Taiwan under grant 111-2112-M-007-019. 99 Arnaud, K. A. 1996, in ASP Conf. Ser. 101, Astronomical Data Analysis Software and Systems V, ed. G. H. Jacoby & J. Barnes (San Francisco, CA: ASP), 17 Alabarta K., et al., 2020, MNRAS, 497, 3896 Belloni, T., Homan, J., Casella, P., et al. 2005, A&A, 440, 207 Bhowmick, R., Debnath, D., Chatterjee, K., et al., 2021, ApJ, 910, 138 Casella, P., Belloni, T., & Stella, L., 2005, ApJ, 629, 403 Chakrabarti, S. K., & Titarchuk, L. G. 1995, ApJ, 455, 623 Chakrabarti, S. K., 1997, ApJ, 484, 313 Chakrabarti, S. K., Debnath, D., Nandi, A., et al. 2008, A&A, 489, L41 Chakrabarti, S. K. 2013, in Proc. Conf. Ser., Vol. 8, Astron. Soc. of India, ed. S. Das (Assam, India), 1 Chakrabarti, S. K., Mondal, S., & Debnath, D., 2015, MNRAS, 452, 3451 Chakrabarti, S. K., 2018, in Ruffini R., Jantzen R., Bianchi M., eds, Proc. 14th Marcel Grossman meeting. 
Study of Accretion processes Around Black Holes becomes Science: Tell Tale Observational Signatures of Two Component Advective Flows. World Scientific Press, Singapore Chakrabarti, S. K., Debnath, D., & Nagarkoti, S. 2019, AdSpR, 63, 3749 Chatterjee, D., Debnath, D., Chakrabarti, S. K., Mondal, S., Jana, A., 2016, ApJ, 827, 88 Chatterjee, D., Debnath, D., Jana, A., Chakrabarti, S. K., 2019, AP&SS, 364, 14 Chatterjee, K., Debnath, D., et al. 2021, Ap&SS, 366, 63 Chatterjee, K., Debnath, D., Bhowmick, R, Nath, S. K., & Chatterjee, D., 2022, MNRAS, 510, 1128 Chen, W., Shrader, C. R., & Livio, M. 1997, ApJ, 491, 312 Debnath, D., Chakrabarti, S. K., Nandi, A., Mandal, S., 2008, Bull. Astron. Soc. India, 36, 151 Debnath, D., Chakrabarti, S. K., & Nandi, A. 2010, A&A, 520, 98 Debnath, D., Mondal, S., & Chakrabarti, S. K. 2014, MNRAS, 440, L121 Debnath, D., Mondal, S., & Chakrabarti, S. K. 2015, MNRAS, 447, 1984 Debnath, D., Jana, A., Chakrabarti, S. K., & Chatterjee, D. 2017, ApJ, 850, 92 Debnath, D., Chatterjee, K., Chatterjee, D., et al. 2021, MNRAS, 504, 4242 Draghis, P. A., Miller, J. M., Cackett, E. M., et al. 2020, ApJ, 900, 78 Dubus, G., Hameury, J.-M., & Lasota, J.-P. 2001, A&A, 373, 251 Ebisawa, K., Titarchuk, L. G., & Chakrabarti, S. K. 1996, PASJ, 48, 59 Fender, R. P., Belloni, T., Gallo, E. 2004, MNRAS, 355, 1105 Gendreau, K. C., Arzoumanian, Z., & Okajima, T. 2012, Proc. SPIE, 8443, 844313 Hameury, J.-M., Lasota, J.-P., Warner, B., 2000, A&A, 353, 244 Hameury, J. M., 2020, Advances in Space Research, 66, 1004 Homan, J., Belloni, T., 2005, AP&SS, 300, 107 Homan, J., Fridriksson, J. K., Jonker, P. G., et al. 2013, ApJ, 775, 9 Jana, A., Debnath, D., Chakrabarti, S. K., Mondal, S., Molla, A. A., 2016, ApJ, 819, 107 Jana, A., Debnath, D., Chatterjee, D., et al. 2020a, RAA, 20, 28 Jana, A., Debnath, D., Chatterjee, D., et al. 2020b, ApJ, 897, 3 Kocevski, D., Ryde, F., & Liang, E., 2003, ApJ, 596, 389 Kuulkers, E., van der Klis, M., Oosterbroek, T., Asai, K., Dotani, T., van Paradijs, J., Lewin, W. H. G., 1994, A&A, 289, 795 Lasota, J. P. 2001, NewAR, 45, 449 Liu, H.-X., Huang, Y., Xiao, G.-C., et al. 2021, RAA, 21, 070 Matsuoka, M., Kawasaki, K., Ueno, S., et al., 2009, PASJ, 61, 999 McClintock J. E., Remillard R. A., 2006, in Lewin W., van der Klis M., eds, Cambridge, Astrophysical Series 39: Compact Stellar X-ray Sources. Cambridge Univ. Press, Cambridge, p. 157 Miller, J. M., et al., 2018, ApJ, 860, L28 Molla, A. A., Chakrabarti, S. K., Debnath, D., Mondal, S., 2017, ApJ, 834, 88 Mondal, S., Chakrabarti, S. K., & Debnath, D., 2015, ApJ, 798, 57 Mondal, S., Chakrabarti, S. K., Debnath, D., 2016, Ap&SS, 361, 309 Nath, S. K., Debnath, D., Chatterjee, K., Jana, A., Chatterjee, D., & Bhowmick, R., 2023, AdSpR, 71(1), 1045 Negoro, H., Nakajima, M., Sugita, S., et al. 2019, ATel, 12968, 1 Novikov, I. D., & Thorne, K. S. 1973, in Black Holes (Les astres occlus), ed. C. DeWitt & B. DeWitt (New York: Gordon and Breach), 343 Parmar, A. N., & White, N. E. 1985, IAUC, 4051, 1 Parmar, A. N., Angelini, L., Roche, P., & White, N. E. 1993, A&A, 279, 179 Prabhakar, G., Mandal, S., Bhuvana, G. R., Nandi, A., 2023, MNRAS, 520, 4889 Remillard, R. A., Loewenstein, M., Steiner, J. F., et al. 2022, AJ, 163, 130 Ren, X. Q., Wang, Y., Zhang, S. N., et al. 2022, ApJ, 932, 66 Shakura, N. I., & Sunyaev, R. A. 1973, A&A, 24, 337 Strohmayer T. E., Nicer Observatory Science Working Group 2020, in American Astronomical Society Meeting Abstracts 235. p. 159.02 Sunyaev, R. A., & Titarchuk, L. G. 
1980, ApJ, 86, 121 Tanaka, Y., & Shibazaki, N. 1996, ARA&A, 34, 607 Tetarenko, B. E., Sivakoff, G. R., Heinke, C. O., Gladstone, J. C., 2016, ApJS, 222, 15 Verner, D. A., Ferland, G. J., Korista, K. T., & Yakovlev, D. G. 1996, ApJ, 465, 487 Wang, Y., Ji, L., García, J. A., et al. 2021, ApJ, 906, 11 Williams, D. R. A., Motta, S. E., Fender, R., Miller-Jones, J. C. A., et al. 2022, MNRAS, 517, 2801 Wilms, J., Allen, A., & McCray, R. 2000, ApJ, 542, 914 Yan, Z., & Yu, W. 2017, MNRAS, 470, 4298 Zhang, G.-B., et al. 2019, ApJ, 876, 5
http://arxiv.org/abs/2307.05858v1
20230712004234
Quantum-Enhanced Metrology for Molecular Symmetry Violation using Decoherence-Free Subspaces
[ "Chi Zhang", "Phelan Yu", "Arian Jadbabaie", "Nicholas R. Hutzler" ]
physics.atom-ph
[ "physics.atom-ph", "quant-ph" ]
[][email protected] California Institute of Technology, Division of Physics, Mathematics, and Astronomy. Pasadena, CA 91125 We propose a method to measure time-reversal symmetry violation in molecules that overcomes the standard quantum limit while leveraging decoherence-free subspaces to mitigate sensitivity to classical noise. The protocol does not require an external electric field, and the entangled states have no first-order sensitivity to static electromagnetic fields as they involve superpositions with zero average lab-frame projection of spins and dipoles. This protocol can be applied with trapped neutral or ionic species, and can be implemented using methods which have been demonstrated experimentally. Quantum-Enhanced Metrology for Molecular Symmetry Violation using Decoherence-Free Subspaces Nicholas R. Hutzler August 12, 2023 ============================================================================================= Precision measurements of time-reversal (T) symmetry violation in molecular systems provide stringent tests of new physics beyond the Standard Model <cit.>. For example, searches for the electron's electric dipole moment (eEDM) have excluded a broad parameter space of T violating leptonic physics at energy scales up to ∼ 50 TeV <cit.>. Experiments aiming to laser cool and trap eEDM-sensitive neutral molecules <cit.> are currently under construction and promise significantly improved measurement precision. The immediate impact of cooling and trapping is the substantially longer coherence time compared to beam experiments, a result of both long trapping time and easier field control for quasi-stationary molecules confined in a small volume. Furthermore, quantum metrology techniques <cit.>, such as entanglement and squeezing, promise routes to additional enhancement of eEDM sensitivity. However, a specific scheme providing metrological gain without added susceptibility to classical noise from electromagnetic fields has, to our knowledge, not yet been conceived. Additionally, contemporary eEDM searches with molecular ions are conducted in non-stationary rotating traps <cit.>, since an external electric field is used to polarize the molecules. Although various improvements will be implemented for near-future experiments <cit.>, molecule motion in the rotating trap during spin precession remains a challenge for implementing entanglement-enhanced metrology. In this manuscript, we show that the eEDM can be observed as a coupling between two entangled molecules within a decoherence-free subspace. The eEDM sensitivity scales linearly with the entangled molecule number, thereby offering Heisenberg-limited sensitivity beyond the standard quantum limit, while the susceptibility to electromagnetic fields remains mitigated. In addition, the two molecules do not have to be aligned in the lab frame by an external electric field; instead, they are prepared in orthogonal superpositions of opposite parity states. As a result, the scheme is applicable to neutral molecules in optical lattices or tweezer arrays <cit.> as well as molecular ions in quasi-stationary traps <cit.>, which enable entanglement generation and are a well-established platform for precision measurement <cit.>. Importantly, the entangled molecular states involved are experimentally achievable using existing entanglement protocols <cit.>, some of which have been demonstrated recently <cit.>, together with single molecule operations <cit.>. 
Our discussion here focuses on the eEDM as an example, but the method can be straightforwardly extended to measure other T violating moments, including the nuclear Schiff moment <cit.> and nuclear magnetic quadrupole moment <cit.>. The energy shift of the eEDM (d_e) in an effective internal molecular electric field () is d_e ·. The internal field points along the molecule axis (n̂) and its amplitude is determined by the electronic structure of the molecule, while the eEDM is collinear with the total electron spin (S). Conventional eEDM experiments <cit.> orient the molecule axis in the lab frame by mixing opposite parity states with an external electric field, and subsequently polarize the electron spin in the lab frame as well. The eEDM interaction then manifests as a small spin-dependent energy shift, measured by performing spin precession in the polarized molecules. However, the polarized molecular dipoles and electron spins also make these experiments sensitive to uncontrolled external fields. As a consequence, the most common quantum metrology methods, such as spin squeezing <cit.>, increase sensitivity to external electromagnetic fields by the same amount as the gain in eEDM sensitivity. The resulting increased susceptibility to decoherence and systematic errors from these fields, which are a main concern for eEDM experiments, can counteract the eEDM sensitivity boost. Here we instead probe the eEDM as a coupling between two opposite-parity states in a molecule. We first consider the effects of this coupling in a single molecule to build understanding of the system, and then discuss how we can engineer entangled states in a two (or more) molecule system which have Heisenberg-limited sensitivity (∝ N) to the eEDM but without concurrent increases in collective electric or magnetic field sensitivity. Again, we consider the eEDM as it provides the simplest possible system, but the methods are applicable to symmetry violating nuclear moments as well. In Fig. <ref>, we provide an example of a single molecule in the parity-doubled bending mode of a ^2Σ triatomic molecule <cit.>, though the method should be generalizable to other types of parity-doubled states. The opposite-parity states are labeled as |0⟩ and |1⟩, and the spin states in the lab basis are labeled by |↑⟩ and |↓⟩. The eEDM causes a spin-dependent coupling between |0⟩↔|1⟩ with a coupling strength ε_CPV = ⟨0_↑| d_eΣ|1_↑⟩ = 2 d_e Σ_0, where Σ = S·n̂ is the projection of spin on the molecule axis and Σ_0 is the expectation value of Σ when averaged over other angular momentum quantum numbers of the molecule wavefunction <cit.>. The coupling changes sign to -ε_CPV for the time-reversed state |↓⟩. In a superposition state such as 1/2(|0⟩+|1⟩)(|↑⟩ + e^iθ|↓⟩), which corresponds to an orientation of perpendicular to the electron spin, the eEDM interaction causes spin precession that changes the phase θ of the spin superposition. Note that this is conceptually similar to the usual idea of creating a superposition of |0⟩,|1⟩ by polarizing the molecule with a static external electric field. However, here we consider creating a superposition of these states without static applied fields, meaning that the orientation of the molecular dipole, and therefore , will be oscillating in the lab frame at a frequency given by the parity splitting ω_𝒫 (typically ∼ 2π×100 kHz to ∼ 2π×100 MHz) between |0⟩ and |1⟩ <cit.>. 
Thus, the eEDM spin precession (≲ 100 μ Hz) can only accumulate phase in the frame rotating at ω_𝒫; in the lab frame, the direction of spin precession oscillates rapidly and averages to zero, so there is no eEDM-induced energy shift or spin precession. However, with two (or more) molecules, we can engineer states where eEDM precession does not average to zero, yet the oscillation in the lab frame makes the molecules highly insensitive to external fields. Furthermore, we shall see that these states have a metrological gain in sensitivity due to entanglement. We denote the superpositions 1/√(2)(|0⟩+e^iω_𝒫t|1⟩)=|⇑⟩ and 1/√(2)(|0⟩-e^iω_𝒫t|1⟩)=|⇓⟩, suggestive of the fact that these states have opposite orientation of the (rotating) molecular dipole. Consider two molecules in the state |⇑⇓⟩, as shown in Fig. <ref>, where we label rotating frame spin states using |↑⟩ and |↓⟩. The rotation of the frame is described as H_rot = ħω_𝒫σ_x in the rotating frame basis <cit.> (also see Supplemental Material). An eEDM shifts |↑↓⟩ and |↓↑⟩ oppositely, as they have opposite relative orientations of electron spins and molecular dipoles. Therefore, an eEDM couples the degenerate singlet and triplet pair states with zero total spin projections. These states constitute a decoherence-free subspace as the molecular electric and magnetic dipole moments have zero average projection on the laboratory fields and are therefore insensitive to them to first order. This is conceptually similar to the eEDM coupling in a hyperfine clock transition <cit.>. Similar to the single molecule case, the eEDM has little effect on the eigenstates of H_rot. However, now we can switch on and off the eEDM spin precession by applying a radio-frequency (rf) magnetic field B in phase with the rotating frame (this is challenging for a single molecule; see Supplemental Material). The rf magnetic field is described by H_B = Ω_B σ_z, with Ω_B the interaction strength (Ω_B ≈μ_B B for ^2Σ_1/2 electronic states), and it shifts |↑↑⟩ and |↓↓⟩ oppositely, as they have different orientations relative to the rf field. The couplings of H_rot, H_B, and eEDM are shown in Fig. <ref>(c) in the Bell state basis (|Ψ^±⟩ = 1/√(2) (|↑↓⟩±|↓↑⟩), |Φ^±⟩ = 1/√(2) (|↑↑⟩±|↓↓⟩)). H_rot and H_B couple |Ψ^+⟩↔|Φ^-⟩ and |Φ^-⟩↔|Φ^+⟩, respectively. The resulting eigenstates are shown in Fig. <ref>(d); the middle state, whose eigenenergy is not shifted, is |u⟩ = sinθ|Ψ^+⟩ - cosθ|Φ^+⟩, with the mixing angle θ given by tanθ = Ω_B/ω_𝒫. Note that these interactions do not couple to |Ψ^-⟩. However, the eEDM interaction couples |Ψ^-⟩↔|Ψ^+⟩ but with coupling strength much smaller than H_B or H_rot. The eEDM therefore induces a resonant coupling |Ψ^-⟩↔|u⟩ with a reduced coupling strength of ε_u = 4ε_CPVΩ_B/√(Ω_B^2 + ω_𝒫^2), which reaches ∼ 90% of the maximum (4ε_CPV) when Ω_B ≳ 2ω_𝒫. Note that this is twice the coupling of a fully polarized single molecule, thereby beating the standard quantum limit. A static magnetic field, or more generally, a magnetic field at a different frequency, causes the phase on the |Ψ^+⟩ part of |u⟩ to oscillate and thus the eEDM coupling averages to zero. Consequently, the eEDM spin precession is turned on only when the magnetic field is in-phase. The eEDM spin-precession subspace is also known as a decoherence-free subspace <cit.>; it is robust to noise since the total spin and dipole projections, and therefore the expectation of electric and magnetic dipole moments, is zero. 
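As a quick check of the quoted numbers, evaluating the reduced coupling given above at Ω_B = 2ω_𝒫 gives ε_u = 4ε_CPV × 2ω_𝒫/√(4ω_𝒫^2 + ω_𝒫^2) = 8ε_CPV/√5 ≈ 0.89 × 4ε_CPV, i.e. roughly 90% of the saturated value 4ε_CPV reached in the limit Ω_B ≫ ω_𝒫; this uses only the expression for ε_u stated above and introduces no additional assumptions.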
The experimental sequence for two molecules, as an example, is illustrated in Fig. <ref>. Molecules are initialized in |0_↓ 0_↓⟩ by optical pumping. Then the spins are entangled in |Ψ^-⟩_lab – this can be realized by direct dipole-dipole <cit.> or Rydberg atom mediated interactions <cit.>, or, for trapped ions, the spin-dependent force gate <cit.> or the Mølmer-Sørenson interaction <cit.>. For more than two molecules, the entangled singlet state can be generated by a set of gate operations, adiabatic sweeping to the ground state of the many-body system <cit.>, or extracting from cluster states <cit.>. Subsequently, the molecule orientation is prepared in |⇑⇓⟩. This can be done in two sub-steps: first drive a global π/2-pulse between |0⟩↔|1⟩, and then apply an AC-Stark shift using a far-detuned laser focused on one of the molecules and imprint a π phase on |1⟩. By addressing different molecules, or by changing the detuning of the laser, the direction of eEDM spin precession can be controlled, thus providing “switches” to observe the eEDM <cit.>. Note that for multiple pairs of molecules trapped simultaneously, this could be performed in parallel across different pairs to mitigate imperfections in the laser pulses. The initial spin state |Ψ^-⟩_lab is invariant under rotations and thus is equal to |Ψ^-⟩ in the rotating frame. Next, when the rf magnetic field is turned on, eEDM spin precession |Ψ^-⟩↔|u⟩ starts in the rotating frame. After eEDM spin precession, the magnetic field is turned off and then the orientation of the molecules is rotated back to |00⟩. In the lab frame, |u⟩ is an rf-dressed state, which is oscillating in the triplet subspace of {|Ψ^+⟩_lab, |Φ^-⟩_lab, |Φ^+⟩_lab}. After turning off the magnetic field, the population in |u⟩ is distributed in the triplet subspace but mostly mapped to |Ψ^+⟩_lab. Finally, the eEDM spin precession phase, i.e. the phase between |↑↓⟩ and |↓↑⟩ components, is measured by a projection measurement in the 1/√(2) (|↑↓⟩± i |↓↑⟩) basis by the parity oscillation measurement <cit.> as described further in the Supplemental Material. Our scheme has many advantages. First, the spin precession rate in the entangled basis is two times faster than in a fully polarized single molecule, and it scales linearly with molecule number for the anti-ferromagnetic spin states (i.e. between 1/√(2) (|↑↓↑ ... ↑↓⟩ + |↓↑↓ ... ↓↑⟩) ↔1/√(2) (|↑↓↑ ... ↑↓⟩ - |↓↑↓ ... ↓↑⟩) for molecule orientation |⇑⇓⇑ ... ⇑⇓⟩), thus realizing a metrological gain from entanglement. More importantly, the eEDM spin precession subspace is decoupled from various environmental noise sources, including magnetic fields, vector and tensor light shifts, etc., since the total spin and dipole projections are zero and the spin precession takes place in a rotating frame where slow noise is averaged out. This is unlike conventional eEDM protocols using polarized molecules in the lab frame, where the eEDM-enhanced entangled states, such as squeezed states or the GHZ state 1/√(2) (|↑↑ ... ↑⟩ + |↓↓ ... ↓⟩), normally require spins aligned collectively in the lab frame and thus are also increasingly sensitive to magnetic field noise, AC Stark shifts, etc. Magnetic field gradients at the same frequency may cause spin precession in the same subspace; however, this effect can be disentangled from an eEDM by switching the sign of eEDM interaction, which is controlled by the phase of the rf magnetic field and the phase of the molecule orientation. 
For example, the spin precession directions in |⇑⇓⟩ and |⇓⇑⟩ are opposite, and the spin does not precess in |⇑⇑⟩ or |⇓⇓⟩. Other couplings, including H_rot and H_B, are insensitive to the ± phase between |0⟩ and |1⟩. Furthermore, our scheme is robust to various experimental imperfections. For example, the fidelity of entanglement generation does not have a lower threshold; the population that is not initialized in |Ψ^-⟩ is not coupled by the eEDM and only contributes a constant background. Many possible sources may cause imperfect initialization of the molecule orientation; they include, for instance, fluctuations in the π/2-pulse power, Stark shifts, imperfect single molecule addressing light shift, or small difference in the g-factors of |0⟩ and |1⟩ states (resulting from perturbations of other electronic states), etc. If a molecule is not in equal superposition of |0⟩ and |1⟩ the eEDM interaction (Σ_0) is slightly reduced. If two molecules are not in exact opposite phases of |0⟩ and |1⟩ superpositions, the splitting between |↑↓⟩ and |↓↑⟩ is reduced (this can be used as a switch to tune the spin precession rate). If two molecules have different |0⟩ and |1⟩ populations, their eEDM interactions (Σ_0) are different and thus |Ψ^-⟩ is also coupled to the |Φ^±⟩ states. However, this additional coupling does not cause spin precession since the |Φ^±⟩ states are strongly coupled by the magnetic field (see Fig. <ref>[c]). Importantly, all the fields are applied independently and they do not have correlation with the eEDM switch (AC Stark shift from the addressing beam). As a consequence, these imperfections do not lead to systematic effects directly, but instead to contrast reduction and increased statistical noise. Magnetic field correlated rf electric fields, stray electric fields, and black-body radiation (BBR) have detrimental effects on the state of molecule orientation and need to be shielded. Our scheme does not require a DC electric field, and shielding electric fields is straightforward, especially without the need for electric field plates nearby. The effects of the residual fields include near-resonant couplings between |0⟩↔|1⟩ and off-resonant effects, such as energy shifts on |0⟩ and |1⟩. The coupling effect is suppressed by the dipole-dipole interaction between two molecules when the residual-field coupling strength is weaker than the dipole-dipole interaction (typically ∼kHz at ∼μ m separation), and it can also be mitigated by applying a stronger electric field in phase with the molecule oscillation. Stray electric fields or off-resonance BBR can cause an energy shift between |0⟩ and |1⟩. This alters the oscillating frequency of the rotating molecules, which may affect coherent control of the molecule orientation and may interfere with the eEDM spin precession by shifting the oscillation out of phase with the magnetic rf field. Nevertheless, stray electric fields can be actively measured and cancelled, especially since the molecules needed for this protocol will be trapped in a small volume ∼mm^3; for example, in trapped ions a residual electric field lower than 0.1 mV/cm has been achieved <cit.>. A 0.1 mV/cm fluctuation corresponds to a maximum ∼ 50 mHz dephasing rate for a molecule of d_0 ≈ 2 D dipole moment and ω_𝒫≈ 100 kHz parity splitting. This leads to a coherence time of ∼ 10 s, and the coherence time is inversely proportional to parity splitting. On the other hand, we need Ω_B ≈μ_B g B ≳ 2ω_𝒫, where g is the electron magnetic g-factor. 
To avoid using high magnetic field (a few Gauss, using a similar magnetic field coil setup in ref. <cit.>), our scheme is most suitable for molecules with ω_𝒫≲ 10 MHz, which is a typical range for parity doubling. In addition, for trapped ions, ω_𝒫 needs to be much lower than the trap rf frequency (∼ 20 MHz). Some examples of suitable neutral and ion species are listed in the Supplemental Material. In summary, we have presented a quantum metrology scheme to probe T-violating effects in molecular systems. The Heisenberg scaling is particularly important for the future experiments where the molecules are well-controlled but do not necessarily have large molecule numbers, such as molecules in tweezer arrays and ion traps, as well as rare radioactive molecules <cit.>. The T-violating interaction causes spin precession in an entangled, decoherence-free subspace in a rotating frame, where the slow noise in the lab frame is averaged out, and the molecules do not need to be polarized by an external electric field. As a result, the scheme is compatible with stationary ion traps, such as the linear Paul trap, in which a powerful toolbox of precision spectroscopy and quantum metrology has been developed, including sympathetic cooling <cit.>, quantum logic spectroscopy <cit.>, ion shuttling <cit.>, micromotion compensation <cit.>, entanglement generation, etc. Furthermore, the direction of spin precession is controlled by the phase of the applied magnetic rf field and the phase of the oscillation of the molecule orientation. In T-violation measurements, systematic effects normally arise from imperfections correlated with the switch of the sign of the T-violating interaction, such as parity state or external electric field. Our eEDM switch is an AC-Stark shift by the far-detuned addressing beam on one of the molecules, which has little correlation with other imperfections, and can be performed in parallel across multiple pairs of molecules. In addition, because of the magnetic field insensitivity, this scheme will also improve the coherence in a shot-noise limited measurement using magnetic molecules, including all laser coolable neutral molecules and certain T-sensitive molecular ions whose ground states are magnetic. These advantages will significantly improve the precision of T-violating new physics searches in the near future. We acknowledge helpful discussions with Andreas Elben, Manuel Endres, Ran Finkelstein, Andrew Jayich, Dietrich Leibfried, Christopher Pattison, John Preskill, Tim Steimle, Yuiki Takahashi, Michael Tarbutt, Fabian Wolf, Xing Wu and the PolyEDM Collaboration. This work was supported by Gordon and Betty Moore Foundation Award GBMF7947, Alfred P. Sloan Foundation Award G-2019-12502, and NSF CAREER Award PHY-1847550. C.Z. acknowledges support from the David and Ellen Lee Postdoctoral Fellowship at Caltech. P.Y. acknowledges support from the Eddleman Graduate Fellowship through the Institute for Quantum Information and Matter (IQIM).
http://arxiv.org/abs/2307.05035v1
20230711061925
Number Systems for Deep Neural Network Architectures: A Survey
[ "Ghada Alsuhli", "Vasileios Sakellariou", "Hani Saleh", "Mahmoud Al-Qutayri", "Baker Mohammad", "Thanos Stouraitis" ]
cs.NE
[ "cs.NE", "cs.AR", "cs.LG" ]
1]Ghada Alsuhli 1]Vasileios Sakellariou 1]Hani Saleh, Senior Member, IEEE, 1]Mahmoud Al-Qutayri, Senior Member, IEEE, 1]Baker Mohammad, Senior Member, IEEE, 1]Thanos Stouraitis [1]Department of Electrical Engineering and Computer Science, System on Chip Center, Khalifa University Number Systems for Deep Neural Network Architectures: A Survey [ August 12, 2023 ============================================================== Deep neural networks (DNNs) have become an enabling component for a myriad of artificial intelligence applications. DNNs have shown sometimes superior performance, even compared to humans, in cases such as self-driving, health applications, etc. Because of their computational complexity, deploying DNNs in resource-constrained devices still faces many challenges related to computing complexity, energy efficiency, latency, and cost. To this end, several research directions are being pursued by both academia and industry to accelerate and efficiently implement DNNs. One important direction is determining the appropriate data representation for the massive amount of data involved in DNN processing. Using conventional number systems has been found to be sub-optimal for DNNs. Alternatively, a great body of research focuses on exploring suitable number systems. This article aims to provide a comprehensive survey and discussion about alternative number systems for more efficient representations of DNN data. Various number systems (conventional/unconventional) exploited for DNNs are discussed. The impact of these number systems on the performance and hardware design of DNNs is considered. In addition, this paper highlights the challenges associated with each number system and various solutions that are proposed for addressing them. The reader will be able to understand the importance of an efficient number system for DNN, learn about the widely used number systems for DNN, understand the trade-offs between various number systems, and consider various design aspects that affect the impact of number systems on DNN performance. In addition, the recent trends and related research opportunities will be highlighted. Number Systems, Artificial Intelligence Accelerators, Deep neural networks, floating point, fixed point, logarithmic number system, residue number system, block floating point number system, dynamic fixed point Number System, Posit Number System. § INTRODUCTION During the past decade, Deep Neural Networks (DNNs) have shown outstanding performance in a myriad of Artificial Intelligence (AI) applications. Since their success in speech recognition <cit.> and image recognition <cit.>, great attention has been drawn to DNNs from academia and industry <cit.>. Although DNNs are inspired by the deep hierarchical structures of the human brain, they have exceeded the human accuracy in a number of domains <cit.>. Nowadays, the contribution of DNNs is notable in many fields including self-driving cars <cit.>, speech recognition <cit.>, computer vision <cit.>, natural language processing <cit.>, and medical applications <cit.>. This DNN revolution is helped by the massive accumulation of data and the rapid growth in computing power <cit.>. Because of their high computational complexity and memory space requirements, general-purpose compute engines (like powerful central processing units (CPUs) and Graphics Processing Units (GPUs)), or customized hardware (e.g., using FPGAs or ASICs) have been used to accelerate DNN processing <cit.>. 
While general-purpose compute engines remain dominant for processing DNNs within academia, the industrial applications of DNNs often require implementation on resource-constrained edge devices ( e.g., smartphones or wearable devices) <cit.>. Whether DNNs are run on GPUs or dedicated accelerators, speeding up and/or increasing DNN hardware efficiency without sacrificing their accuracy continues to be a demanding task. The literature includes a large number of survey papers that have been dedicated to highlighting the directions that can be followed to reach these goals <cit.>. Some examples of these directions are DNN model compression <cit.>, quantization <cit.>, and DNN efficient processing <cit.>. One of the directions that have a great impact on the performance of DNNs, but has not been comprehensively surveyed yet is the DNN number representation. As the compute engines use a limited number of bits to represent values, real numbers cannot be infinitely represented. The mapping between a real number and the bits that represent it is called number representation <cit.>. Generally speaking, number representation has a great impact on the performance of both general-purpose and customized compute engines. Recalling the huge amount of data that need to be processed in the context of DNNs, the choice of the format used to represent these data is a key factor in determining the precision of DNN data, storage requirements, memory communication, and arithmetic hardware implementation <cit.>. This in turn shapes different metrics of the DNN architecture performance; mainly the accuracy, power consumption, throughput, latency, and cost <cit.>. To this end, there is a significant body of literature that has focused on assessing the suitability of specific number systems for DNNs, modifying conventional number systems to fit DNN workloads, or proposing new number systems tailored for DNNs. Some of the leading companies, such as Google <cit.>, NVIDIA <cit.>, Microsoft <cit.>, IBM <cit.>, and Intel <cit.>, have contributed in advancing the research in this field. A comprehensive survey of these works will be helpful to furthering the research in this field. While conventional number systems like Floating Point (FLP) and Fixed Point (FXP) representations are frequently used for DNN engines, several unconventional number systems are found to be more efficient for DNN implementation. Such alternative number systems are presented in this survey and they are the Logarithmic Number System (LNS), Residue Number System (RNS), Block Floating Point Number System (BFP), Dynamic Fixed Point Number System (DFXP), and Posit Number System (PNS). Figure <ref> shows the bit visualization of conventional and unconventional number systems used in DNN implementation. The structure of the survey is summarized as follows. * Section <ref> gives an overview of conventional number systems and their utilization for DNNs. * Section <ref> classifies the DNNs that adopt the logarithmic number system. * Section <ref> describes the concepts behind the residue number system and its employment for DNNs. * Section <ref> describes the block floating point representation and the efforts done to make it suitable for DNNs implementation. * Section <ref> discusses the dynamic fixed point format and the work done to calibrate the parameters associated with this format. * Section <ref> explains various DNN architectures that utilize Posits and the advantages and disadvantages associated with these architectures. 
* Section <ref> provides an insight into recent trends and research opportunities in the field of DNN number systems. § CONVENTIONAL NUMBER SYSTEMS FOR DNN ARCHITECTURES The two conventional number systems, mainly the floating point and the fixed point, are the common choice for almost all general-purpose DNN engines. While the FLP representation is usually used for modern computation platforms (e.g., CPUs and GPUs), where high precision is a must, FXP is more common in low-cost computation platforms that are used in applications that demand high speed, low power consumption, and small chip area. In this section, these two representations are introduced and their utilization for implementing DNN hardware is briefly discussed, in order to facilitate the comparison between conventional and unconventional number systems. §.§ FLP for DNN Architectures In the FLP number system, a number n is represented using a sign (1 bit), an exponent e (unsigned integer of length es) and a mantissa m (unsigned integer of length ms) (Figure <ref>) and its value is given by n =(-1)^s × 2^e-e_max× (1+m/2^ms), where e_max=2^es-1-1 is a bias used to ease the representation of both negative and Positive exponents. Although there are several FLP formats <cit.>, the IEEE 754 FLP format <cit.> is the most common representation used by modern computing platforms <cit.>. According to IEEE 754, the FLP can be of single, double, or quad-precision depending on the used bit-widths (e.g., for the single-precision FLP the bit-width is 32 bits and es=8). The single-precision FLP, also called FLP32, is commonly used as a baseline to evaluate the efficiency of other number representations. Unless otherwise stated, the performance degradation or enhancement is presented in comparison to the FLP32 format in this survey as well. Multiplication of two FLP numbers is implemented in hardware by adding their exponents, multiplying the mantissas, normalizing the resultant mantissa, and adjusting the exponent of the product <cit.>. FLP addition involves comparing the operand exponents, shifting their mantissas (if the exponents are different), adding the mantissas, normalizing the sum mantissa, and adjusting the sum exponent <cit.>. Usually, the increased complexity of the FLP32 arithmetic requires using a separate unit called Floating Point Unit (FPU) to perform the FLP calculations <cit.>. The high power consumption and cost of this unit limits its usage within embedded processing units such as FPGAs <cit.>. Consequently, the standard FLP32 is rarely used for building efficient DNN architectures <cit.>. To increase the efficiency of the FLP in DNN architectures several custom FLP formats <cit.> have been proposed. Also new designs of the FLP arithmetic hardware (mainly the multiplier) have been investigated <cit.>. The 32-bit FLP representation has a wide dynamic range, beyond what is usually required for DNNs <cit.>, resulting in a low information-per-bit metric, which means an unnecessary increase in power consumption, area, and delay. For this reason, the proposed custom FLP representations mainly have reduced bit-width and a different allocation of the bits to mantissa and exponent, than IEEE 754. The bit-width is reduced to 19 bits in Nvidia’s TensorFloat32 <cit.> and 16 bits in Google’s Brain FLP (bfloat16) <cit.> formats used in DNN training engines. 8-bit FLP has been proposed to target the DNN inference in <cit.>. 
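To make the bit allocation and the value formula above concrete, the sketch below decodes a normalized bfloat16 pattern (1 sign bit, es = 8, ms = 7); it deliberately ignores subnormals, infinities, and NaNs, and is meant only as an illustration of the formula, not of any particular hardware datapath.

```python
def decode_flp(bits: int, es: int = 8, ms: int = 7) -> float:
    """Decode a normalized FLP value: (-1)^s * 2^(e - e_max) * (1 + m / 2^ms).

    The defaults correspond to bfloat16 (1 + 8 + 7 = 16 bits); es=8, ms=23 would
    give the IEEE 754 single-precision layout. Special values are not handled.
    """
    s = (bits >> (es + ms)) & 0x1
    e = (bits >> ms) & ((1 << es) - 1)
    m = bits & ((1 << ms) - 1)
    e_max = (1 << (es - 1)) - 1        # exponent bias, 127 for es = 8
    return (-1) ** s * 2.0 ** (e - e_max) * (1 + m / 2 ** ms)

print(decode_flp(0x3FC0))              # 1.5: e = 127, m = 64 -> 2^0 * (1 + 64/128)
```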
These reduced FLP formats proved their efficiency in replacing FLP32 with comparable accuracy, higher throughput, and smaller hardware footprint. It is worth noting that most of these custom FLP formats are used to represent data stored in memory (i.e., weights and activations), whereas, for internal calculations (e.g., accumulation and weight updates), FLP32 is used instead to avoid accuracy degradation <cit.>. In summary, the standard FLP representation has a massive dynamic range, which makes it a good choice for computationally intensive algorithms that include a wide range of values and require high precision. At the same time, the complex and power-hungry FLP calculations make FLP less attractive for DNN accelerators. This leads to using narrower custom FLP formats which require less hardware and memory footprint while preserving the performance of the standard FLP32. However, the utilization of the FLP format for DNN accelerators is relatively limited and it loses ground to fixed point and other alternative representations. §.§ FXP for DNN Architectures The power inefficiency of the FLP arithmetic is the main motivation to replace it with the FXP format for designing energy-constrained DNN accelerators. A real number n is represented in FXP with the sign, the integer, and the fraction parts. The fixed point format is usually indicated by <I,F> where I and F correspond to the number of bits allocated to the integer and the fractional parts, respectively. In this paper, we use the notations FXP8, for example, to denote the FXP representation with bit-width equal to 8, i.e., I+F+1=8. In FXP format, the separation between the integer and the fractional parts is implicit and usually done by specifying a scaling factor that is common for all data. Thus, the FXP number can be treated as an integer and, hence, integer arithmetic is used. Integer arithmetic requires substantially fewer logic gates to be implemented and consumes much less chip area and power, compared to FLP arithmetic. This makes FXP attractive to be used for DNN accelerators on the edge. Moreover, the FXP allows for more reduction in the number of bits resulting in a significant reduction in the power consumption, storage requirements, and memory bandwidth <cit.>. On the other hand, the dynamic range[The dynamic range of a number system is the ratio of the largest value that can be represented with this system to the smallest one.] of data represented by low precision FXP is limited. This makes FXP suitable to represent data with only narrow range of values. Since this is not the case for most DNNs, using low precision FXP for DNNs is challenging. To enable this, various approaches were adopted such as quantization <cit.>. For instance, uniform quantization includes scaling weights and activations of DNN and mapping them to a restricted range of values. These values can be represented by low-bit-width FXP. This allows lowering the number of bits to be less than 8 bits <cit.>, and even as low as 2 bits (i.e., ternary DNNs <cit.>) or 1 bit (i.e., binary DNNs <cit.>). For more information about the FXP quantization, precision reduction, and binary DNNs the interested reader is referred to <cit.>. In short, the FXP for DNN implementation offers great hardware efficiency at the expense of some accuracy degradation. Between the two extreme representations (FLP and FXP), there are several number systems that offer different trade-offs (Pareto optimal points) between the hardware efficiency and the acquired accuracy. 
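As a small illustration of the uniform quantization described above, the sketch below maps a real-valued weight tensor to signed 8-bit integers with one shared scaling factor; this symmetric, per-tensor scheme is only one of the many quantization variants covered in the cited surveys, and the tensor shown is synthetic.

```python
import numpy as np

def quantize_fxp(x: np.ndarray, bits: int = 8):
    """Symmetric uniform quantization to a signed integer grid, x ≈ codes * scale."""
    qmax = 2 ** (bits - 1) - 1                    # 127 for 8-bit codes
    scale = np.max(np.abs(x)) / qmax              # one scaling factor for the tensor
    codes = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return codes, scale

weights = np.random.randn(64, 64).astype(np.float32)
codes, scale = quantize_fxp(weights)
recovered = codes.astype(np.float32) * scale
print("max abs quantization error:", float(np.abs(weights - recovered).max()))
```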
These number systems and their usage for DNN implementation are presented in subsequent sections of this paper. § LNS FOR DNN ARCHITECTURES Proposals for LNS first emerged in the 1970s to implement the arithmetic operations of digital signal processing. The utilization of LNS for neural computing was first proposed in the late 90's <cit.>. Since then, using LNS to implement efficient hardware for DNN has become more popular. The main benefit of using LNS is in simplifying the implementation of the costly arithmetic operations required for DNN inference and/or training <cit.>. In addition, representing the data in LNS enables a reduction of the number of bits required to obtain the same DNN accuracy as with conventional number systems <cit.>. In LNS, a real number n is represented with a logarithm of radix a of its absolute value (ñ=log_a(|n|)) and a sign bit s_n[Some works use additional dedicated bit z_n to indicate when n equals zero <cit.>, while others use a special code to represent zero <cit.>.] <cit.>. The number ñ is represented using two's complement fixed point format <cit.>, as shown in Figure <ref>. The radix a of the logarithm is usually selected to be 2 for simpler hardware implementation. Throughout this survey, we will use a=2 as well. The main DNN operation that can be dramatically simplified using LNS is the multiplication by transforming it into linear (i.e., fixed point) addition. The LNS product p̃ of two real numbers n_1, and n_2 is calculated as follows p̃ =ñ_̃1̃ ⊙ñ_̃2̃, =log_2(|n_1| ×|n_2|), =ñ_̃1̃ + ñ_̃2̃, s_p̃=s_n_1 XOR s_n_2, where ⊙ is the multiplication operation in LNS domain that can be implemented with a simple integer adder, and s_p̃ is the product sign, which is calculated by XORing the signs (s_n_1 and s_n_2) of the two operands. Existing proposals for LNS-based DNNs are for either using LNS for the whole DNN architecture from end-to-end, just for using the LNS-based multipliers, or for using logarithmic quantization for DNN weights and/or layer inputs. Based on this classification, LNS-based DNN architectures are discussed next by highlighting the challenges associated with each architecture and the solutions presented in the related work. §.§ End-to-end LNS-based DNN Architectures End-to-end-LNS implementation utilizes the LNS for all blocks of the architecture, and thus, no conversion from or to conventional systems takes place. For this, the inputs (i.e., the dataset) and the weights[When the architecture targets DNN inference.] are assumed to be fed to DNN in LNS format. This task is usually performed offline and has no overhead on the implemented architecture. In this section, we review LNS-domain implementation of the main operations that are needed for DNN training and inference. The two types of DNNs that were implemented using LNS from end to end are convolutional neural networks (CNNs) <cit.> and recurrent neural networks (RNNs) <cit.>. These two types of DNN have different architectures, but they share the same basic operations which are multiplication, addition, and activation functions. Since the multiplication operation becomes a linear addition in LNS-domain, the challenging part of this architecture is implementing LNS-addition and LNS-activation functions, which are discussed next. §.§.§ Addition in LNS As opposed to multiplication, performing addition in LNS is not straightforward. Let ñ_̃1̃ and ñ_̃2̃ be the two operands to be added in LNS. 
This LNS addition ⊕ is usually defined as follows s̃ũm̃ =ñ_̃1̃ ⊕ñ_̃2̃, =log_2|(-1)^s_n_1×2^ñ_̃1̃ +(-1)^s_n_2×2^ñ_̃2̃|, where s̃ũm̃ is the LNS domain summation of the two operands, and s_n_1 and s_n_2 are their signs. As these operands can be negative or Positive, s̃ũm̃ is derived from <ref> <cit.> such that s̃ũm̃= {[ max(ñ_̃1̃,ñ_̃2̃)+log_2(1+2^-|ñ_̃1̃-ñ_̃2̃|),; if s_n_1 =s_n_2,; max(ñ_̃1̃,ñ_̃2̃)+log_2(1-2^-|ñ_̃1̃-ñ_̃2̃|),; if s_n_1≠ s_n_2, ]. s_s̃ũm̃= {[ s_n_1, n_1>n_2,; s_n_2, n_1 ≤ n_2, ]. where s_s̃ũm̃ is the sign of the summation. To reduce the computational complexity of calculating s̃ũm̃, the term Δ_±= log_2(1±2^-|ñ_̃1̃-ñ_̃2̃|) is approximated using look-up-tables (LUTs) <cit.> or reduced to be implemented via bit-shifts <cit.>. The LUT approximation requires using LUT of size r_max/r, where r_max is the range of values stored in the LUT and r is the resolution of the stored values. For the bit-shift implementation, the approximation in (<ref>) is utilized to replace the calculation of log_2(1±2^-|ñ_̃1̃-ñ_̃2̃|) by a simple shift operation illustrated in (<ref>). log_2 (1+x)=x, for 0<x<1, log_2(1±2^-|ñ_̃1̃-ñ_̃2̃|)= ±𝐁𝐒(1,-|ñ_̃1̃-ñ_̃2̃|), where 𝐁𝐒(b,d)=b × 2^d means to shift the bits of b binary representation by |d| Positions, to the left if d is negative and to the right otherwise. Figure <ref> shows how these two approximations are almost equivalent, However, LUTs require circuits with larger silicon and add extra delays to the system <cit.>. §.§.§ Activation Functions in LNS Some activation functions can be transformed directly to the LNS domain by using the LNS operations to implement them. For example, the Leaky-ReLU for a number n in linear domain, shown in (<ref>), is simply represented in LNS domain as in (<ref>) <cit.>. LReLU(n|α)= {[ α n, n<0,; n , n≥0, ]. L̃R̃ẽL̃Ũ( (ñ, s_n)|α)= {[ (ñ+α, s_a=s_n), s_n=1,; (ñ, s_a=s_n), s_n=0, ]. where α is a constant, s_a is the sign after applying the activation, and ñ and s_n are the logarithm and the sign of n. However, for more complicated activation functions, such as Sigmoid, tanh, and Softmax, more efficient hardware is obtained if these functions are approximated with piece-wise approximation that can be implemented using combinational logic <cit.>. The motivation of this is that the approximation becomes an additional source of non-linearity and places a low burden on the performance of the implemented DNN architecture. This is particularly the case if these functions are used within the training process which is inherently noisy <cit.>. As an example, equation (<ref>) shows the LNS domain piece-wise approximation of tanh activation function in (<ref>) <cit.>. tanh(n)= 1-e^-n/1+e^-n t̃ãñh̃(ñ, s_n)≈{[ (0, s_a=s_n), ñ> 0,; (ñ, s_a=s_n), -10<ñ≤ 0,; (0, s_a=1), ñ<-10,; (0, s_a=1), ñ=0, z_n=1, ]. §.§.§ Summary and Discussion of End-to-End LNS-based DNN architectures When LNS is used to represent DNN data from end to end, all operations needed to perform DNN training and / or inference must be implemented in the LNS domain. Multiplication is implemented using FXP addition, while other operations, such as addition and activation functions, need to be approximated. The presented approximation techniques that have been proposed for DNN implementation introduce insignificant loss in the performance of the implemented architectures. The classification accuracy degradation is found to be less than 1% for the studied end to end LNS-based CNN architectures <cit.>. These work do not investigate the associated impact on the CNN hardware efficiency. 
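To make the LNS operations above concrete, the following minimal sketch implements radix-2 LNS multiplication and the bit-shift approximation of LNS addition; plain Python floats stand in for the fixed-point logarithm word, and zero handling, bit-widths, and the chosen test values are simplifying assumptions.

import math

# Minimal sketch of radix-2 LNS arithmetic on (log2|n|, sign) pairs. Python floats
# stand in for the fixed-point logarithm word; zero operands are not handled.

def to_lns(n):
    return math.log2(abs(n)), 0 if n >= 0 else 1

def from_lns(n_log, s):
    return (-1) ** s * 2.0 ** n_log

def lns_mul(a, b):
    # Multiplication: add the logarithms and XOR the signs.
    return a[0] + b[0], a[1] ^ b[1]

def lns_add(a, b):
    # Addition via max + log2(1 +/- 2^-|d|), with the correction term approximated
    # by log2(1 + x) ~ x, i.e., a shift of '1' by -|d| bit positions.
    d = abs(a[0] - b[0])
    corr = 2.0 ** (-d) if a[1] == b[1] else -(2.0 ** (-d))
    sign = a[1] if a[0] > b[0] else b[1]     # sign of the larger-magnitude operand
    return max(a[0], b[0]) + corr, sign

x, y = to_lns(6.0), to_lns(10.0)
print(from_lns(*lns_mul(x, y)))              # 60.0 (exact up to float rounding)
print(from_lns(*lns_add(x, y)))              # ~15.2 versus the exact 16 (approximation error)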
However, an idea about the hardware efficiency is offered by LNS implementation of long short-term memory (LSTM) architecture where the area is saved by 36% for a 9-bit design <cit.>, while the area savings decrease when the number of bits increased, due to the LUTs required for the LNS addition approximation. §.§ LNS Multiplier-based DNN architectures Since end-to-end LNS implementations of DNNs introduces complexity for implementing additions and activation functions, an alternative approach is to limit using the LNS to the multipliers. The focus here is to design an efficient LNS-based Multiplier that receives linear operands and produces a linear product as well. §.§.§ LNS-based Multiplier To simplify the discussion and comparisons between various LNS multipliers proposed for DNN, some notations are introduced for the next subsections. Let n be a Positive integer and its w-bit binary representation is B_n= b_w-1 b_w-2… b_0. Let b_k be the most significant '1' in B_n (k is called the characteristic number of n). The linear number n and its logarithm can be represented by n=2^k (1+x), log_2 (n)=k+log_2 (1+x), where 0≤ x < 1 is called the mantissa of n. Let n_1=2^k_1 (1+x_1) and n_2=2^k_2 (1+x_2) be the multiplier and multiplicand, respectively. The product of these operands and its logarithm are given by n_1 × n_2=2^k_1+k_2 (1+x_1) (1+x_2), log_2(n_1 × n_2)=k_1+k_2+log_2 (1+x_1)+log_2 (1+x_2), The main idea of the logarithmic multiplier is to use a specific approximation based on the characteristics of the logarithms to simplify the product calculation by mainly using shift and add operations instead of hardware-intensive conventional multiplication. Given their effectiveness, many logarithmic multipliers have been proposed for image processing and neural computing <cit.>. Several of these multipliers were utilized to also build efficient DNN architectures. They are classified in this survey into multipliers that use Mitchell's approximation, iterative logarithmic multipliers, double-sided error multipliers, and multipliers with explicit logarithm and antilogarithm modules. Mitchell's Multiplier According to Mitchell's algorithm <cit.>, the logarithm of a number n is approximated with piece-wise straight lines as in (<ref>). Thus, the logarithm of the product in (<ref>) is approximated by the sum of the characteristic numbers and the mantissas of the operands as follows log_2(n_1 × n_2)≈ k_1+k_2+x_1+x_2. The final product is obtained in (<ref>) by applying the antilogarithm on (<ref>) using the approximation in (<ref>). Then, the product of two integers is calculated using add and shift operations, as n_1 × n_2 ≈{[ 2^k_1+k_2 (1+x_1+x_2), x_1+x_2 < 1; 2^k_1+k_2+1 (x_1+x_2), x_1+x_2 ≥ 1 ]. Even though the error introduced by Mitchell's approximation is relatively high (up to 11% <cit.>), this multiplier showed no accuracy degradation for CNN architecture with 32- bit precision <cit.>, while being 26.8% more power-efficient compared to conventional multipliers of the same number of bits. To gain additional power efficiency over the one achieved by Mitchell's multiplier, a truncated-operand approach has been proposed <cit.>. Instead of using the whole operands, these operands are truncated and only their ω most significant bits are used to calculate the approximated product. For instance, selecting ω=8 allows for a more efficient multiplier that saves up to 88% and 56% of power when compared to an exact 32-bit FXP multiplier and a Mitchell's multiplier, respectively. 
The additional error introduced by this truncation caused an accuracy degradation of 0.2% for the ImageNet dataset. The significant power saving associated with the negligible performance degradation of this approach comes from the fact that the most significant part of the operand can be sufficient to provide an acceptable approximation <cit.>. Iterative Logarithmic Multiplier The iterative logarithmic multiplier aims to reduce the error introduced by the approximation in (<ref>) by adding correction terms. The calculation of these terms usually requires iterative multiplications that can be calculated in the same way as calculating the approximate product ( see Figure <ref>). These correction terms can be biased (always Positive) or unbiased (negative/Positive). The product of two numbers in (<ref>) can be written using biased correction terms <cit.> as n_1 ×n_2 =P_approx+ E, where P_approx =2^k_1+k_2(1+x_1+x_2) is an approximate product that can be calculated using shift and add operations. E=2^k_1+k_2 x_1 x_2 is a correction term that is ignored in (<ref>). Estimating the term E requires calculating the product (2^k_1 x_1) (2^k_2 x_2) = (n_1-2^k1) (n_2-2^k2) iteratively, in the same way of calculating P_approx. Then, n_1 ×n_2 =P_approx^(0) + E^(0), =P_approx^(0) + P_approx^(1)+E^(1), =P_approx^(0) + P_approx^(1)+…+P_approx^(i-1)+E^(i-1), where i is the number of iterations and E^(i-1) is the error to be ignored after the i^th iteration. Notice that when i equals the number of bits that have the value of '1' in the operands, then E^(i-1) = 0, and the exact product is produced. For each iteration, the new operands to be multiplied are obtained by removing the leading 'ones' from the original operands. For this reason, the correction terms can be calculated in parallel using one additional circuit for each iteration. Hence, there is a trade-off between the accuracy of the multiplication and the area and power overhead due to adding these correction circuits. For example, this iterative logarithmic multiplier with one iteration (i.e., one correction circuit) was able to save 10% on area and 20% on power consumption without any notable impact on the learning accuracy when it was used to implement the hardware of a relatively simple neural network and compared with the case of using floating point multiplier <cit.>. On the other hand, using unbiased iterative correction terms of (<ref>) shows a better area and power reduction by up to 44.6% and 48.1%, respectively, compared to the multiplier designed with the error terms of (<ref>) <cit.>. E = {[ ((1-x_1)2^k_1-1) ((1-x_2)2^k_2-1),; if x_1+x_2+2^-k_1,2≥ 1; (x_1 2^k_1) (x_2 2^k_2),; if x_1+x_2+2^-k_1,2< 1, ]. where k_1,2=max(k_1, k_2). Using 32-bit multipliers based on the correction terms in (<ref>) and (<ref>) gives classification accuracy comparable to that of default floating point multiplier for various CNN architectures <cit.>. Double-sided Error Multiplier The logarithmic approximation in (<ref>) underestimates the logarithm value and results in an always negative error. Since tolerating and even having better performance in the presence of noise is an important feature of DNNs, creating a multiplier with "double-sided error" can enhance the implemented architecture of DNN in terms of accuracy and hardware efficiency <cit.>. To achieve this, another logarithmic approximation may be utilized <cit.>. In addition to the expression in (<ref>), any integer number n can be represented as n=2^k+1 (1-y), where 0≤ y < 1. 
The two representations in (<ref>) and (<ref>) can be used to come up with a new logarithmic approximation with double-sided error <cit.>, as log_2 (n) ≈{[ k+x, n-2^k <2^k+1-n.; k+1-y, otherwise. ]. To utilize this approximation, the two multiplied operands, n_1,and n_2, are transformed into the closest powers of two plus an additional negative or Positive term (a_1, a_2, respectively). Hence their product can be calculated as n_1 ×n_2 =(2^k_1+a_1) ×(2^k_2+a_2), =2^k_1+k_2+a_1 2^k_2+a_2 2^k_1+a_1a_2. The product is approximated to be the sum of the first three terms in (<ref>), whereas the last term (a_1a_2) is omitted as an approximation error. In fact, this approximation has a larger absolute error compared to Mitchell's approximation (of (<ref>)). This can be observed as well from Figure <ref>. However, the signed errors help with canceling the error and having higher classification accuracy by up to 1.4% compared to the case of using a conventional exact multiplier to implement CNNs <cit.>. This comes in addition to the better hardware efficiency indicated by 21.85% of power savings. Explicit Logarithm-Antilogarithm Multiplier For the aforementioned multipliers (i.e., Mitchell's, iterative,.. ) there is no explicit module for logarithm and antilogarithm calculation. The implementation of these operations is not done explicitly, but their characteristics are used to transform the costly multiplication into simpler operations. On the other side, the logarithmic multiplier can be designed by explicitly transforming the operands into the logarithmic domain, adding the operands, and finally returning back to the linear domain. As calculating the exact logarithm is very costly, the logarithm/antilogarithm operations are usually approximated using LUTs <cit.> or bit-level manipulation <cit.>. One approach that uses LUT-based approximated log/antilog multipliers for CNN is presented in <cit.>. Let n, represented in (<ref>), be an operand in the linear domain. The logarithm of this operand is approximated as log_2 (n) =k+log_2 (1+x), ≈k+Q_β(log_2 (Q_γ(1+x))), where Q_γ is the quantization used to represent (1+x) with γ bits in the linear domain, and β bits are guaranteed for the approximated logarithm in the log domain using Q_β. The mapping from 1+x into log_2 (1+x) is obtained using a LUT of size 2^γβ bits. If the product in LNS domain is represented by p̃= m.f, where f ∈ [0, 1) is the fraction part and m is the integer one, the approximated antilog of this product is calculated by log_2^-1(p̃) =2^m 2^f, ≈2^m (1+Q_α(2^f-1)), i.e., the term 2^m is implemented by a bit shift, whereas 2^f is approximated using a LUT. The LUT maps f to Q_α(2^f-1), where Q_α is the applied quantization to limit the number of bits to α. LUT-approximation is usually used to escape from the errors introduced by the approximation in (<ref>). However, to keep the size of the required LUTs reasonable, the value of α, γ, and β should be kept as small as possible. This introduces a loss in accuracy. In addition, this approach is expected to be less hardware-friendly because of the needed overhead to implement these LUTs. Nevertheless, experimental results showed that integrating 16-bit LUT-based multiplier with a wide FXP accumulator results in a reduction in power consumption and area by up to 59% and 68%, respectively, in comparison to 16-bit FLP multiplier <cit.>. This comes in addition to achieving a negligible accuracy degradation (<1%) for the CNN ResNet50 network trained on the ImageNet dataset. 
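Before turning to bit-manipulation-based converters, the integer-only sketch below contrasts Mitchell's single-sided approximation with the double-sided-error variant described above; the operand values are arbitrary, and the code is meant only to expose the sign and size of the approximation errors, not to model a hardware implementation.

def mitchell_mul(n1: int, n2: int) -> int:
    # Mitchell's approximation: always underestimates the true product.
    k1, k2 = n1.bit_length() - 1, n2.bit_length() - 1    # characteristic numbers
    x1 = (n1 - (1 << k1)) / (1 << k1)                    # mantissas in [0, 1)
    x2 = (n2 - (1 << k2)) / (1 << k2)
    if x1 + x2 < 1:
        return int((1 << (k1 + k2)) * (1 + x1 + x2))
    return int((1 << (k1 + k2 + 1)) * (x1 + x2))

def double_sided_mul(n1: int, n2: int) -> int:
    # Round each operand to its nearest power of two plus a signed residue, then
    # keep the first three terms of (2^k1 + a1)(2^k2 + a2) and drop a1*a2.
    def nearest_pow2(n):
        k = n.bit_length() - 1
        return k + 1 if (n - (1 << k)) > ((1 << (k + 1)) - n) else k
    k1, k2 = nearest_pow2(n1), nearest_pow2(n2)
    a1, a2 = n1 - (1 << k1), n2 - (1 << k2)              # signed residues
    return (1 << (k1 + k2)) + a1 * (1 << k2) + a2 * (1 << k1)

n1, n2 = 87, 51
print(n1 * n2, mitchell_mul(n1, n2), double_sided_mul(n1, n2))
# 4437 (exact), 4000 (Mitchell, underestimate), 4736 (double-sided, overestimate)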
Another approach for approximating the log/antilog modules is to use bit-level manipulation to devise area- and speed-efficient logarithm or antilogarithm operations <cit.>. Among these works, the two-region manipulation-based logarithm converter and the bit-correction-based antilogarithm converter of <cit.> are used to implement an LNS multiplier that is exploited to build a CNN accelerator design that is efficient from an area and delay point of view <cit.>. When this design is compared to a conventional multiplier implementation, it saves up to 60% of the area-delay product. However, neither the accuracy of the CNN nor a comparison with other logarithmic multipliers has been reported for this design. Summary and Discussion of LNS-based Multipliers LNS-based multipliers use the characteristics of the logarithm to transform the multiplication into simpler operations. Most of the logarithmic multipliers proposed for DNN architectures start from Mitchell's approximation. Table <ref> compares various architectures that use LNS multipliers. We notice that these multipliers are used to implement efficient CNN architectures suitable for DNN inference rather than training. In addition, the table shows that the vanilla Mitchell's multiplier offers power-efficient hardware with accuracy comparable to that of the FLP32 baseline at the same number of bits. However, reducing the number of bits requires a more accurate approximation with a smaller average error than Mitchell's. When the LNS multiplier is designed with the characteristics of DNNs in mind, such as the double-sided-error multiplier, the outcome is a further reduction in the number of bits with significant power savings, while preserving and even enhancing the classification accuracy of the reported CNNs. §.§ Logarithmic quantization for DNN architectures Logarithmic quantization involves representing a real number n with a sign and an integer exponent (an integer power of two). The integer is usually an approximation of the logarithm log_2|n| of the real number after applying clipping and rounding <cit.>. Logarithmic quantization has been employed in order to achieve efficient hardware implementations of CNNs <cit.>. The main idea behind this is that multiplication by this integer exponent can be easily implemented in hardware by bit shifting. In CNNs, both the convolutional and fully-connected layers include matrix multiplication, i.e., a dot product between the weights w of each layer and the input activation x, which is the output of the previous layer after applying the non-linearity (e.g., ReLU). This matrix multiplication is usually performed using a number of multiply-and-accumulate operations when the conventional data representation is used to implement digital hardware, as shown in Figure <ref>(a). However, this dot product can be implemented more efficiently when logarithmic quantization is utilized. Due to the non-uniform distribution of the weights and inputs, non-uniform quantization, such as logarithmic quantization, is preferred over uniform quantization, such as when FXP is used <cit.>. Existing CNN architectures that use logarithmic quantization assume that the weights and/or the inputs of the layer are quantized. When logarithmic quantization is applied to inputs only (i.e., activations), Figure <ref>(b), or to weights only, Figure <ref>(c), the dot product becomes a simple bit shift operation followed by an accumulation.
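The following minimal sketch illustrates the weight-only case: each weight is replaced by a sign and an integer exponent, and the dot product with integer activations is computed with shifts and an accumulation only; the rounding rule, the exponent clipping value, and the toy vectors are illustrative assumptions.

import math

# Minimal sketch of weight-only logarithmic quantization: each weight becomes a
# (sign, integer exponent) pair, so the dot product with integer activations needs
# no multiplier, only bit shifts and an accumulator.

def log_quantize_weight(w: float, min_exp: int = -7):
    s = 0 if w >= 0 else 1
    e = int(round(math.log2(abs(w)))) if w != 0 else min_exp
    return s, max(min_exp, e)                     # clip very small magnitudes

def shift_dot(activations, quant_weights):
    acc = 0
    for a, (s, e) in zip(activations, quant_weights):
        term = (a << e) if e >= 0 else (a >> -e)  # multiply by 2^e via shifting
        acc += -term if s else term
    return acc

acts = [12, 3, 9, 7]                              # integer (e.g., FXP) activations
weights = [0.48, -2.1, 0.26, 1.0]
qw = [log_quantize_weight(w) for w in weights]
print(shift_dot(acts, qw))                        # 9, versus 8.8 for the exact dot product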
applying logarithmic quantization on the weights only shows insignificant accuracy degradation <cit.> and significant power and area savings <cit.>, see Table <ref>. Quantizing the inputs only in LNS results in the same performance from an accuracy point of view <cit.>, however, with an additional linear-to-LNS module to be added. This module is responsible for transforming output activations to LNS before storing them in memory. This scheme has the advantage of requiring a smaller memory bandwidth as the stored activation is represented in LNS <cit.>. The works that apply logarithmic quantization to weights as well as to activations usually use a logarithm radix different from '2' <cit.>. Then, the multiplication becomes an addition of the LNS quantized weights and activations followed by an approximation to decode this sum into the linear domain before implementing the accumulation. This add-decode-accumulate scheme adds a complication to hardware implementation, however, with comparable accuracy to the aforementioned logarithmic quantization schemes, as illustrated in Table <ref>. § RNS FOR DNN ARCHITECTURES The Residue Number System (RNS) can be an attractive choice for DNN accelerators due to its arithmetic properties. In this Section, a brief overview of the RNS is given, and several RNS-based architectures for AI applications reported in literature are presented. Architectures are classified to partially RNS-based, where intermediate conversions to conventional representations between successive layers are used, and end-to-end RNS-based architectures, where the entire processing takes place in the RNS domain. The typical computation flow of these two types of systems is shown in Fig.  <ref>. The number representation scheme utilized in realizing DNN architectures directly impacts the accuracy, speed, area, and energy dissipation. Modern Deep Learning models keep growing in depth and number of parameters and require a huge amount of elementary arithmetic operations, the majority of which are multiply-add operations (MAC). In the Residue Number System, each number is represented as a tuple of residues with respect to a modulus set {m_1,m_2,…,m_n}, which is called the base of the representation. The dynamic range of the representation is given by R = ∏_i=1^N m_i . If the moduli are co-prime, i.e., _1≤ i,j≤ N i≠ j (m_i,m_j ) = 1, where (·) denotes the greatest common divisor operation, each integer inside the range [0, R) has a unique RNS representation X ↦(x_1,x_2, …, x_n) , x_i = ⟨ X ⟩_m_i, where ⟨·⟩ _m is the modulo-m operator. Inverse transformation is generally harder and can be realized by means of the Mixed Radix Conversion or the Chinese Remainder Theorem <cit.>. §.§ RNS Addition and Multiplication Due to the properties of the modulo operation, addition and multiplication can be done independently and in parallel for each residue channel, i.e., without inter-channel propagation of information. Suppose A = (a_1,a_2,…,a_n) and B = (b_1,b_2,…,b_n), then a ⊕ b = ( ⟨ a_1 ⊕ b_1⟩_m_1, ⟨ a_2 ⊕ b_2⟩_m_2 , … , ⟨ a_n ⊕ b_n⟩_m_n ), where ⊕ can be either the addition or the multiplication operator. This property is what makes RNS very efficient in applications that require a large number of these operations, such as DSP applications and, more recently, Neural Network inference. This is because, by decomposing the computations into independent channels, long carry propagation chains are eliminated, thus arithmetic circuits can operate at higher frequencies, or with reduced power dissipation. 
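The sketch below illustrates forward conversion and the channel-wise, carry-free arithmetic just described; the base (31, 32, 33) = (2^5-1, 2^5, 2^5+1) and the test values are illustrative choices of pairwise co-prime moduli and in-range operands.

BASE = (31, 32, 33)                   # pairwise co-prime; dynamic range R = 32736

def to_rns(x, base=BASE):
    return tuple(x % m for m in base)

def rns_add(a, b, base=BASE):
    return tuple((u + v) % m for u, v, m in zip(a, b, base))

def rns_mul(a, b, base=BASE):
    return tuple((u * v) % m for u, v, m in zip(a, b, base))

x, y = 150, 200                       # x*y + x = 30150 stays inside the dynamic range
print(to_rns(x * y + x))              # (18, 6, 21): reference computed in binary
print(rns_add(rns_mul(to_rns(x), to_rns(y)), to_rns(x)))   # (18, 6, 21): same residues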
The general architecture of a modulo adder is shown in Fig. <ref> <cit.>. The design consists of an n-bit adder, where n is the size of the channel, that performs the addition of the two numbers a + b, and a CSA-based adder which computes a + b - m_i (the modulo operation). The sign of the CSA result is used to select the correct result of the two adders. The selection of moduli can significantly simplify the design of modulo arithmetic circuits. In the case of moduli of the form 2^k, the modulo operation translates into just keeping the k least significant bits, whereas in the case of 2^k-1, the output carry of the addition simply needs to be added back to the result. In this case, end-around-carry adders can be used. For channels of the form 2^k+1, diminished-1 arithmetic can be used <cit.>, which basically involves an inverted end-around logic. If the size of the channel is large, then fast adder designs such as prefix adders must be utilized within each channel. Modulo multiplication is a trickier operation; however, the benefits of RNS can be greater. This is because of the (approximately) quadratic scaling of a multiplier with the input size. This means that, by decomposing a large multiplication into smaller ones, the energy and delay savings can be significant, provided that the overhead of the modulo operation is kept small. One approach for RNS multiplication is to perform a regular multiplication of the two n-bit numbers and then use a reduction circuit to obtain the final result modulo m_i. This approach, however, introduces considerable overhead to the design, as the reduction of a 2n-bit number to an n-bit number modulo m_i is not as straightforward as in the case of addition. A low-complexity adder-based combinatorial multiplier has been proposed in <cit.>, where the number of FAs required is minimized. Other multiplication techniques are based on intermediate RNS transformations, such as core functions <cit.> and isomorphisms <cit.>, which are transformations that convert multiplication into addition. These transformations utilize look-up tables to convert RNS to an intermediate representation where multiplication is translated into addition. In the case of modulo-2^k multiplication, regular multipliers operating only on the k LSBs can be used, whereas in the case of modulo-(2^k + 1), diminished-1 arithmetic can be applied <cit.>. An end-around-carry multiplier which can be used for 2^k-1 channels is shown in Fig. <ref>. Due to the properties of this particular channel, the modulo operation is translated into simple bit re-ordering, thus no overhead is introduced. Based on the above, most of the RNS designs reported in the literature utilize these low-cost forms of moduli, which allow the RNS benefits (elimination of long carry chains) to be fully exploited with minimal hardware overhead. §.§.§ Conversions and Non-trivial Operations While addition and multiplication are very efficiently implemented in RNS, other operations such as sign detection, comparison and division, or the realization of non-linear activation functions are not straightforward to implement, as they require combining the RNS channels. A common approach is to use RNS-to-binary converters and then perform the operation in the binary domain. Conversion to and from an RNS representation is crucial for the performance of any RNS-based processing system. Especially for the architectures that perform frequent intermediate conversions (partially RNS-based), the overhead can be significant.
The complexity of these converters largely depends on the particular base selection, namely the size, number, and format of the moduli. While binary-to-RNS or forward converters can have a relatively simple hardware realization, following Eq. <ref>, especially if particular forms of moduli are used, RNS-to-binary or inverse converters are generally harder to implement. An extensive bibliography exists on this topic. The most commonly used approaches are the Chinese Remainder Theorem (CRT) and the Mixed Radix Conversion (MRC) <cit.>. The CRT is expressed as X = ⟨∑_i=1^n m̂_i ⟨ x_i m̂_i^-1⟩_m_i⟩_M, where ⟨·⟩ denotes the modulo operation, X is the binary representation of the number, x_i are its residues, m_i are the moduli, M is the dynamic range, m̂_i = M/m_i, and m̂_i^-1 is the multiplicative inverse of m̂_i modulo m_i. The CRT requires the pre-computation of m̂_i and m̂_i^-1, additions of potentially large products, as well as the final modulo operation with M, which can be very large. It can be computed, however, in a single cycle. On the other hand, MRC requires the computation of some intermediate coefficients and is a sequential process which requires several steps, but these steps only include small bit-width operations. The Mixed Radix Conversion finds the coefficients k_1, k_2, …, k_n such that X = k_1 + k_2m_1 + k_3m_1m_2 + … + k_nm_1m_2… m_n-1. The coefficients are calculated one by one in a number of steps <cit.>, each of which requires the previously calculated coefficients. The modulo inverses can be pre-calculated, and pipelining stages can be introduced to make this computation efficient. Sign detection is one of the most critical and frequent operations required by NNs, as the Rectified Linear Unit (ReLU), which maps negative values to zero, is the most common activation function. In an RNS representation, numbers in the range 0 ≤ X < M/2 are positive, whereas numbers in the range M/2 ≤ X < M are negative. Magnitude comparison of two RNS numbers, which is required for the MaxPooling layers, is also difficult to implement directly in the RNS domain. Comparison algorithms for particular moduli sets (2^k-1, 2^k, 2^k+1) <cit.>, as well as more complex general ones <cit.>, have been proposed that can eliminate the overhead of the conversion. If the choice of moduli is restricted to some specific bases, simple and efficient algorithms have been reported for sign detection <cit.> and comparison. Finally, division, which is necessary after the multiplication and accumulation operations of a convolutional layer, for example, in order to bring the result back into the original dynamic range, also requires special handling. Methods that use special forms of moduli, such as powers of two <cit.> or a product of the moduli <cit.>, as divisors can simplify the hardware implementation. Some methods use small (only one-channel-wide) look-up tables and typically rely on base extension methods, during which an RNS base with k channels is extended to k+r channels. §.§ Partially RNS-based Architectures A common approach in RNS-based DNN implementations is to perform all multiply-add operations of a single convolutional or dense layer in the RNS representation and then use a converter to obtain a partial result in the normal positional binary representation <cit.>. With this intermediate result, the non-linear activation functions (ReLU, tanh, softmax) can be computed, and the results can again be converted to RNS format to be fed to the next layer.
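As a small end-to-end illustration of this partially RNS-based flow, the sketch below runs the multiply-add work of a toy layer channel-wise in RNS, converts the partial result back to binary with the CRT, and applies ReLU in the binary domain; the base, the signed-value convention, and the tiny layer are illustrative assumptions, and the small helpers from the previous sketch are repeated so the example is self-contained.

from math import prod

BASE = (31, 32, 33)
R = prod(BASE)

def to_rns(x, base=BASE): return tuple(x % m for m in base)
def rns_add(a, b, base=BASE): return tuple((u + v) % m for u, v, m in zip(a, b, base))
def rns_mul(a, b, base=BASE): return tuple((u * v) % m for u, v, m in zip(a, b, base))

def crt_to_int(residues, base=BASE):
    # Chinese Remainder Theorem: X = < sum_i M_i * <x_i * M_i^-1>_{m_i} >_M.
    x = 0
    for r, m in zip(residues, base):
        M_i = R // m
        x += M_i * ((r * pow(M_i, -1, m)) % m)
    x %= R
    return x - R if x >= R // 2 else x        # interpret the upper half as negative

def rns_layer(acts, weights):
    acc = to_rns(0)
    for a, w in zip(acts, weights):           # all MACs stay in the RNS domain
        acc = rns_add(acc, rns_mul(to_rns(a), to_rns(w)))
    return max(0, crt_to_int(acc))            # ReLU applied after conversion to binary

print(rns_layer([3, -5, 7], [2, 4, -1]))      # 6 - 20 - 7 = -21, so ReLU gives 0
print(rns_layer([3, 5, 7], [2, 4, 1]))        # 6 + 20 + 7 = 33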
Many application-specific AI accelerator designs, as well as more general purpose architectures, such as TPUs or GPUs, perform DNN computations by decomposing them into matrix or vector multiplication primitives. Thus, by utilizing efficient hardware matrix multipliers, performance can be orders of magnitude better than CPUs. An RNS TPU (Tensor Processing Unit) is proposed in <cit.>. In the core of this architecture there is a RNS matrix multiplier implemented as a two dimensional systolic array. Each processing element performs one operation (MAC) at each cycle, and passes the result to neighboring processing elements. Systolic arrays are an efficient way of increasing throughput and dealing with the limited memory bandwidth problem. In this particular RNS systolic array, each processing element decomposes the larger MAC operation (typically 8 or 16 bits), into smaller, each within the range of the respective channel, that can be performed in parallel. Using an FPGA implementation the RNS matrix multiplier is reported to perform a 32×32 fixed point matrix multiplication up to 9× more efficiently than a binary matrix multiplier for large matrices. In <cit.> the authors extend the RNS usage to the implementation of the convolution operation. Individual layers are executed on an RNS-based FPGA accelerator. However results are sent to a CPU, which performs the non-trivial RNS operations, such as applying the activation functions, before being sent back to the FPGA for the execution of the next layer. RNS results in a reduction of the hardware costs of a single convolutional layer compared to the two’s-complement implementation, depending on the RNS base selection. A variant of the Residue Number System, called the Nested RNS (NRNS) is proposed in <cit.>. NRNS applies a recursive decomposition of the residue channels into smaller ones. Adder and multipliers can be thus implemented by using smaller and faster circuits. Assuming that a number X has a RNS representation of (x_1, x_2,…, x_n), then the nested RNS representation will be of the form X = (x_1,x_2,…,(x_i1,x_i2,…,x_im),…,x_n) where (x_i1,x_i2,…,x_im) is the RNS decomposition of the i-th channel This technique introduces an additional complexity, as any operation must be recursively applied to each level of the representation, however it manages to handle large dynamic ranges with very small channels. The authors use a 48-bit equivalent dynamic range composed only of 4-bit MAC units which can be realized by look-up tables of the FPGA. Contrary to <cit.>, which relies on an external CPU, in this work binary-to NRNS and NRNS-to-binary conversions are realized by DSP blocks and on-chip BRAMs. After Input data are converted into the NRNS representation, a number of parallel convolutional units perform all the necessary computations of a single convolutional layer. The results are then converted to binary using a tree-based NRNS-to-binary converter. The authors report a performance per area improvement of 5.86× compared to state-of-the-art FPGA implementations for the ImageNet benchmark. In a different approach the RNS arithmetic costs are reduced by restricting the RNS base selection to low-cost moduli of the form 2^k± 1 <cit.>. This way, modified fast prefix adders and CSA trees using end-around-carry propagation can be used, diminishing any overhead of the modulo operator. In another category of RNS-based architectures, the usage of very small channels allows the realization of multiplier-free CNN architectures. 
The authors utilize a small RNS base of (3,4,5) and reduce the implementation of the multiplications to shifts and additions <cit.>. Despite the reduced dynamic range of the representation, the authors report minimal accuracy loss, while achieving 36% and 23% reduction in power and area, respectively. A method to drastically reduce the number of multiplications in CNN RNS-bases accelerators is proposed in <cit.>. It utilizes a modified hardware mapping of the convolution algorithm where the order of operations is rearranged. Because of the small dynamic range of each RNS channel, there is an increased number of common factors inside the weight kernels during convolution. By first executing the additions of the input feature map terms that correspond to the same factors, and then performing the multiplications with the common weight factors, a 97%, reduction of the total multiplications is reported for state-of-the-art CNN models. §.§ End-to-end RNS Architectures While the above circuits mange to achieve some performance gain in the implementation of a single convolutional layer, they require significant amounts of extra hardware to perform the conversions which can become the bottleneck for some of these designs. More recent approaches focus on overcoming the difficulties of performing operations such as sign detection, comparison, and scaling which is usually required following multiplication. In these approaches, input data are initially converted to an RNS representation and then the entire processing takes place in the RNS domain. §.§.§ State-of-the-art End-to-end RNS Architectures The system in <cit.> introduces some novel mechanisms for dealing with this problem and proposes an efficient fully RNS-based architecture. The authors of this work choose to work with moduli of the form 2^k-1, 2^k, 2^k+1-1. In particular they select (31,32,63) as the basis of their representation, as it is found to provide a sufficient dynamic range (16-bit equivalent), that results in no accuracy loss, for state-of-the-art networks and benchmarks. For the design of the modulo adders, which are simplified due to the particular selection of the moduli, parallel-prefix Sklansky adders with an end around carry are utilized. For the multiplications, a radix-4 Booth encoding is adopted within each channel. An optimized sign detection unit for this set of moduli is used, based on an approach proposed in <cit.>. which can be further transformed and result in a relatively hardware-friendly implementation. Using a similar logic to the work proposed in <cit.>, the comparison of two RNS numbers can also be implemented by calculating auxiliary partitioning functions. The authors also introduce a base extension mechanism which is necessary in order to avoid potential overflow when accumulating the partial sums. In this work, a base extension method proposed in <cit.> is used, where the middle channel is extended from 2^k to 2^k+e. This way the convenient properties of the chosen moduli are maintained. Base extension takes places once before each multiplication to ensure that the product lies within the dynamic range and then again before the accumulation. The authors define the number of extra bits that are added each time based on extensive simulation on benchmark networks and on a per-layer basis. RNS circuits result in significant delay and energy efficiency improvement, especially in the case of multiplication at the cost of larger overall area. 
Comparisons in terms of various performance metrics against the Eyeriss <cit.> accelerator are reported for various networks. Up to a 61% reduction in energy consumption compared to the conventional positional binary representation has been achieved. The system can also support an increased clock frequency as high as 1.20 GHz versus 667 MHz in the case of the positional binary system, indicating a 1.8× improvement in computational latency. §.§.§ In-Memory Computing RNS Architectures Recently, there has been a growing focus of AI accelerator design research on in-memory computing. This is because of the paradigm-shifting effect that emerging memory technologies can have on processing systems. It is known that the largest part of the energy consumption of any DNN accelerator is due to the memory accesses and data transfers, particularly to and from the off-chip RAM. In-memory computing (IMC) aims to diminish data transfer costs by bringing the computing inside or near the memory elements. Efforts have been made to bring the benefits of the RNS to IMC systems. In these (mainly digital) IMC designs, the benefit of using RNS over a binary representation stems from the speedup of the bitwise serial addition operations, due to the inherent parallelism of the RNS channels. RNS has been utilized in the design of an in-memory computing system <cit.>. In this work, the selected moduli are of the form 2^k-1, 2^k, 2^k+1. A sign detection mechanism similar to <cit.> is developed in order to implement the ReLU and MaxPooling operations without having to convert to a binary representation. Addition and multiplication within each RNS channel take place inside the memory elements. Multiplication of two numbers a and b is implemented through addition and memory accesses by calculating the quantity (a+b)^2/4 - (a-b)^2/4, where squaring is implemented using look-up tables. A single crossbar memory is assigned to each neuron and supports in-memory addition in a tree-based structure. For this purpose, Memristor Aided loGIC (MAGIC) is used. Based on experimental results, the proposed RNS in-memory architecture consumes 145.5× less energy and leads to a speedup of 35.4× compared to an NVIDIA GTX 1080 GPU. A near-memory RNS-based processing architecture is proposed in <cit.>. Instead of memristor-based memory macros, a DRAM computational sub-array is utilized for the implementation of the MAC operations in the RNS domain, combined with parallel-prefix adders, to implement bitwise multiplication and accumulation. Unlike <cit.>, where multiplication is directly implemented in memory (by mapping to additions and squaring), here it is implemented by combining elementary bit-wise operations (AND, OR, XOR) between the operands. The authors also design a more flexible activation function unit which is based on a Mixed-Radix conversion. Similar to <cit.>, an RNS base of (2^k-1, 2^k, 2^k+1-1) is utilized. Gains in the order of 331-897× in terms of energy efficiency compared to GPU platforms are reported, and 2× compared to other IMC designs. §.§ Summary of RNS-based DNN architectures RNS-based architectures targeting DNN applications are summarized in Table <ref>. The majority of these approaches utilize low-cost moduli of the form 2^k-1, 2^k, 2^k+1 to reduce the overhead of the modulo operator, and they target CNNs. Most of these RNS accelerators can achieve speedups in the order of 1.5-3× and can also be more energy efficient. IMC RNS-based systems exhibit the largest energy savings.
Among conventional systems, <cit.> illustrates more clearly the applicability of the RNS in DNN architectures by proposing a fully RNS system which outperforms the binary state-of-the-art counterpart. The RNS usage is also extended to LSTM networks, by designing hardware friendly RNS activation units for the implementation of tanh and sigmoid functions <cit.>. In conclusion, the Residue Number System (RNS) can be an attractive number representation choice for DNN accelerators, and several RNS-based architectures have been reported recently targeting AI applications, due to its various advantages. RNS exhibits inherent parallelism at the residue channel processing level. It utilizes parallel computations along separate residue channels, where operations in each of them are performed modulo a specific modulus, with no need for information (carry or other) to be shared between residue channels. The main challenge in designing an efficient RNS-based accelerator is to minimize or, possibly, eliminate the overhead introduced due to the implementation of the non-linear operations. Another key factor is the optimization of the moduli selection and the corresponding arithmetic circuits, to meet the accuracy requirements. Some of the RNS systems proposed in recent literature only perform the multiply-add (MAC) or matrix multiplication operation, required by the convolutional layers, in the RNS representation, and use intermediate converters between number systems for the non-linear operations. More recently, completely RNS-based approaches have been proposed that eliminate the overhead introduced by these intermediate conversions to and from a traditional positional binary representation. § BFP FOR DNN ARCHITECTURES BFP representation offers a middle ground between FLP and FXP formats. This representation is proposed to preserve accuracy comparable to full precision FLP and hardware efficiency comparable to FXP. This is achieved by representing numbers with an exponent and a mantissa similar to FLP to guarantee a wide dynamic range. However, instead of representing each value separately, a group (called here a block) of values has a common exponent while maintaining private mantissas. Let N be a tensor that represent a block of t elements initially represented in FLP as N =(n_1, …n_i, …n_t), =((-1)^s_1 m_1 2^e_1, …(-1)^s_i m_i 2^e_i, …(-1)^s_t m_t 2^e_t). This block is represented with BFP format as Ǹ such that Ǹ =(ǹ_̀1̀, …ǹ_̀ì, …ǹ_̀t̀), =((-1)^s_1 m̀_̀1̀, …(-1)^s_i m̀_̀ì, …(-1)^s_t m̀_̀t̀)×2^ϵ_N, where ϵ_N is a shared exponent between the elements of block N, and m̀_̀ì is the aligned mantissa of element i such that m̀_̀ì = 𝐁𝐒(m_i,e_i-ϵ_N), where 𝐁𝐒 is the bit-shift operation. For large difference between the Private and shared exponents (e_i-ϵ_N), this shifting causes some of the least-significant bits of the mantissa to be truncated. The truncation happens frequently when there are many outliers in a block, which in turn depends on the size of the block and the way the shared exponent is selected. Since the dot product is the basic operation involved in DNN inference and training, the main target of BFP is to simplify the complex hardware required to perform this operation when FLP is used. For two blocks Ǹ_̀1̀ and Ǹ_̀2̀ represented in BFP, the dot product is calculated as Ǹ_̀1̀.Ǹ_̀2̀^T =((-1)^s_1,1 m̀_̀1̀,̀1̀, …(-1)^s_t,1 m̀_̀t̀,̀1̀)×2^ϵ_N_1. 
((-1)^s_1,2 m̀_̀1̀,̀2̀, …(-1)^s_t,2 m̀_̀t̀,̀2̀)^T×2^ϵ_N_2 =2^ϵ_N_1+ϵ_N_2 ∑_i=1^t (-1)^s_i,12 m̀_̀ì,̀1̀×m̀_̀ì,̀2̀, where s_i,j and m_i,j are the sign and mantissa of the i^th element in the j^th block, respectively, ϵ_N_j is the shared exponent of the j^th block, s_i,12 results from XORing s_i,1 and s_i,2, and T stands for transposition. Equation (<ref>) shows that the dot product of two blocks of size t represented in BFP involves t FXP multiplications of mantissas, t-1 FXP additions of the products, and one addition of the two shared exponents. The additional overhead compared to the FXP representation comes from the hardware required to handle the shared exponents, which mainly depends on the number of blocks <cit.>. As a result, the performance of a DNN under the BFP representation is determined by the block partitioning scheme, the shared exponent selection, and the bit-widths of the mantissa and the shared exponent, which will be discussed next. §.§ BFP Block Design Determining how the blocks are partitioned is essential to achieving good DNN performance with BFP <cit.>. Usually, the input activation of each layer is considered as one block, whereas the weight matrix needs a specific scheme to be divided into blocks. There are two known blocking approaches, filter-based blocking <cit.> and tile-based blocking <cit.>, illustrated in Figure <ref>(a) and Figure <ref>(b), respectively. In filter-based blocking, each filter of weights along the input channels is considered a block. Then the total number of blocks equals the number of filters. This blocking is usually called coarse-grain blocking, and it is the most hardware-friendly blocking approach as the accumulation of each output activation is done with the same shared exponent. Thus, it can be done using FXP arithmetic <cit.>. However, this approach may end up with severe accuracy degradation due to the increased number of outliers that need to be truncated within these large blocks. On the other hand, tile-based blocking is proposed to strike a compromise between accuracy and hardware efficiency. This approach relies on breaking the large matrices of the filters down into small tiles to fit into limited hardware resources. Each tile is considered as a block with a shared exponent. The size of these tiles is a metric that needs to be optimized. For example, a large tile of size 576 is used in <cit.>, which requires a 12-bit mantissa to obtain acceptable accuracy. However, the authors in <cit.> showed that 12-bit FXP can achieve similar accuracy with a simpler hardware implementation. This indicates that BFP may have no advantage over FXP for such large tiles. Smaller tiles of 16 elements are used in <cit.> seeking better accuracy, but with an added hardware complication that comes from the need to convert to FLP before the accumulation. §.§ Shared Exponent Selection One shared exponent for each block needs to be selected after partitioning the blocks[As in the case of partitioning the weight blocks prior to DNN inference, which is usually performed offline.] and whenever a new block is created with multiple shared exponents. For example, this exponent is aligned after performing the calculation of each DNN layer, as the calculation of the output activation usually ends up with a matrix of multiple exponents <cit.>. To this end, most DNN accelerators that adopt BFP calculate the shared exponent dynamically during DNN training or inference. Static shared exponent selection can be utilized prior to DNN inference.
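To make the block-wise quantization and the dot product in the equation above concrete, the following sketch quantizes two blocks with a shared per-block exponent (chosen with the maximal-exponent rule discussed next) and evaluates their dot product with integer MACs; the mantissa width and the example blocks are illustrative assumptions.

import math

# Minimal sketch of BFP: one shared exponent per block (the maximum private
# exponent), per-element integer mantissas, and a dot product that reduces to
# integer MACs plus one exponent addition. The mantissa width ms is an assumption.

def to_bfp(block, ms=7):
    eps = max(math.floor(math.log2(abs(v))) for v in block if v != 0)   # shared exponent
    mants = [round(v / 2.0 ** eps * 2 ** ms) for v in block]            # aligned mantissas
    return mants, eps

def bfp_dot(block1, block2, ms=7):
    m1, e1 = to_bfp(block1, ms)
    m2, e2 = to_bfp(block2, ms)
    acc = sum(a * b for a, b in zip(m1, m2))          # fixed-point MACs only
    return acc * 2.0 ** (e1 + e2) / 2 ** (2 * ms)     # apply the two shared exponents

w = [0.31, -0.07, 1.62, 0.005]                        # the small 0.005 loses LSBs here
x = [2.4, 1.1, -0.3, 5.0]
print(bfp_dot(w, x))                                  # ~0.209 versus the exact 0.206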
One of two schemes is usually used for this dynamic shared exponent selection: maximal exponent-based or statistics-based. The dynamic maximal exponent selection scheme is more popular <cit.>. In this scheme, for each block of (<ref>), the floating point numbers n_i are compared and the maximum exponent is selected as follows ϵ_N = max{e_i: i ∈{1, …, t}}. To find this maximal exponent before performing the dot product between the weights and the activations resulting from the previous layer, the output activations represented in BFP with several exponents need to be converted back into FLP, which adds a large overhead in terms of performance and resources. To keep the advantage of the dynamic calculation of the shared exponent while avoiding frequent conversion between BFP and FLP, the statistics-based scheme is proposed to predict the shared exponent during DNN training <cit.>. In this scheme, the optimal exponent for each block is predicted based on statistics collected in the previous learning iteration. For example, in <cit.> the maximum value recorded within each block is stored for the last i iterations. Then, the maximum and the standard deviation of the stored values are used to calculate the shared exponent for the next iteration. This scheme works because the values within each block change slowly during the training. However, although this scheme avoids the conversion to FLP to calculate the exponent, some additional overhead is required to store the recorded statistics for each block. Thus, this scheme is suitable for the case when the number of blocks is relatively small. The static shared exponent scheme is presented to eliminate the exponent calculation overhead when BFP is employed for CNN inference rather than training <cit.>. Instead of dynamically calculating the shared exponent at run-time, the shared exponent can be set to a constant value estimated offline. The common approach to determine the shared exponent offline is to minimize the Kullback–Leibler (K-L) divergence <cit.> between the FLP32 and BFP distributions of all blocks before the inference. By doing so, the extra memory and computational resources used for the exponent and for the conversion between BFP and FLP are eliminated <cit.>. Because the input and output activations may have different shared exponents, a bit shift is needed after each layer calculation, Figure <ref>(c). Figure <ref> summarizes the dataflow of BFP when each of the three shared exponent determination schemes is adopted. §.§ BFP Precision The precision of BFP is determined by the number of bits allocated to both the shared exponent and the mantissa. Reducing this precision is desirable in order to increase the arithmetic efficiency and reduce the memory bandwidth requirements. At the same time, an over-reduced mantissa bit-width results in what is known as the zero setting problem <cit.>. This problem occurs when all the bits of the mantissa are shifted out, resulting in a zero number representation despite the presence of the exponent value. Over-reducing the number of bits of the shared exponent is much worse, since the actual exponent of the block can no longer be represented, and the resulting truncation ruins the correct representation of all numbers in the block. This precision is usually either static <cit.> or dynamic <cit.>. In the static precision, the number of bits is fixed and selected offline. To select the best precision, usually a few experiments are performed using different numbers of bits <cit.>.
This gives an insight on the impact of this metric on the performance of DNN and allows for picking the minimum number of bits that preserve acceptable accuracy or the one that gives the best trade of between hardware efficiency and accuracy. Reducing the mantissa bit-width was paid attention in the literature because the performance of DNN is less sensitive to mantissa reduction compared to shared exponent. For example, 23-bit mantissa, same as the case of FLP, is required to guarantee the convergence of the Q-learning in <cit.>, whereas 8-bit mantissa, or even less, was found to be sufficient for other CNN accelerator designs <cit.>. This indicates that the required static precision depends on the problem to be solved (mainly, the used dataset and DNN model). The dynamic precision of mantissas is presented in <cit.>. This dynamic precision is basically needed when the implemented DNN architecture is intended to be used for training rather than inference. This is attributed to the fact that the distributions of the weights, activations, and weight updates change during the training. Figure <ref> <cit.> is an example of how the distribution changes during DNN training (at the start of the training and after 164 epochs). To speed up the training, the authors in <cit.> proposed an adaptive training by changing the precision of BFP progressively across both training iterations and layer depth. This relies on the fact that the training is more amenable to low-precision in its early stages. In their approach, two levels of precision are supported, mainly 4-bit and 2-bit mantissas. For each block, the relative improvement due to using the higher precision is estimated by quantizing the block numbers using both precisions. Then, if this relative improvement is higher than a threshold the higher precision is used. This threshold differs based on the layer depth and training iteration. On the other hand, mixed dynamic precision of BFP is proposed because the distribution of the weight updates (gradients) changes more frequently than other variables during training <cit.>. This scheme assigns different, higher, precision to the weight updates compared to weights and activations. At the same time, their implementation supports adjusting this precision online during the training time to be one of the two levels (e.g., 4-bit or 8-bit mantissa). For each training iteration, the number of zero setting problem occurrences is tracked. If this problem happens more frequent than a predefined threshold, this indicates that the current precision is not sufficient and should be increased in the next training iteration. To avoid the fluctuation in the precision, a hysteresis controller is utilized by specifying two thresholds, upper and lower, for increasing and decreasing the precision, respectively. This dynamic precision showed no accuracy degradation with 16% speed-up compared to the static precision. However, the dynamic precision advantage usually comes at the expense of added complication to the design of the hardware which should be reconfigurable with multi-mode arithmetic to adapt according to the selected precision. §.§ Summary and Discussion of BFP-based DNN architectures The main idea behind BFP representation is to strike a balance between the wide dynamic range but hardware inefficient FLP format and the limited-range hardware-friendly FXP format. 
BFP can be considered as a general format that has two extreme cases, i.e., the FLP case, when each value is placed in a separate block, and the FXP case, when all values of the architecture are treated as a single block with one shared exponent. Thus, different trade-offs can be obtained by specifying different design choices represented by the block size, the shared exponent selection, and the bit-width choice. Various CNN architectures that utilize the BFP representation are listed in Table <ref>. The first observation from this table is that even though BFP was initially proposed to implement efficient hardware capable of performing the CNN training phase without ruining the accuracy, this representation has received the same amount of attention for highly accurate inference hardware implementations. Most of these architectures achieve negligible accuracy degradation compared to FLP even with less than an 8-bit mantissa <cit.>. Different implementations make use of different combinations of the discussed design choices; thus, the reported results of these works cannot be used to prove the superiority of a specific design choice over the others. However, we can conclude that there is no clear trend in the accuracy enhancement when tile-based blocking is used instead of a filter-based one. § DFXP FOR DNN ARCHITECTURES The DFXP representation shares the same concept as the BFP discussed in Section <ref>, and sometimes the notations DFXP and BFP are used interchangeably. As in the case of BFP, in DFXP the values are grouped and different scaling factors (i.e., shared exponents) are used for different groups. Thus, a scaling factor is unique for each group (e.g., layer). In some cases, it can be changed from time to time (i.e., dynamic). This is in contrast to FXP, which assigns a single global scaling factor to the whole DNN architecture all the time. To this end, Equations (<ref>,<ref>,<ref>) are applicable to DFXP. Although several works use the term DFXP to indicate a representation similar to BFP <cit.>, the majority of works use DFXP to indicate an FXP representation provided with the flexibility to change the position of the decimal point, which specifies the lengths of the integer and fraction parts for each group of values, Figure <ref>. This requires the scaling factor ϵ_N of a group N in (<ref>) to be in the range [-w_N,0], where w_N is the bit-width used to represent the elements of group N <cit.>. Hence, the DFXP representation can be reduced to a <I_N,F_N> format, where I_N and F_N are the numbers of bits allocated to the integer and fractional parts, respectively, for all values within a group, such that w_N=I_N+F_N and ϵ_N=-F_N. Thus, the zero setting problem that frequently happens with BFP does not appear for DFXP, at the expense of a limited dynamic range that is nevertheless still wider than that of FXP. We limit the discussion in this section to these works, whereas the works that use the DFXP notation to indicate BFP are discussed in Section <ref>. A notable difference between DNN architectures that use BFP and DFXP is that the latter give less attention to the way the groups (i.e., blocks) are partitioned. The common grouping approach for DFXP-based DNN architectures is to consider the weight, bias, input activation, and gradient vectors (when DFXP is used to accelerate training) of each layer as separate groups, each associated with a different scaling factor <cit.>.
Only one architecture, presented in <cit.>, statically clusters the filters (i.e., weights) that accumulate to the same output activation of each layer. Then, each cluster represents a group that has its own unique scaling factor. The quantization error is effectively reduced with smaller clusters (e.g., when a cluster contains 4 filters) since smaller groups tend to have a smaller range of values. The main differences between the DFXP representations in different works lie in the way of finding the best scaling factor F_N and determining the bit-width w_N. The approaches used to optimize the decimal point position and specify the precision of DFXP are classified in the next subsections. §.§ Group Scaling Factor Selection The scaling factor (i.e., F_N) assignment to each group in DFXP is usually performed in an offline or online manner. The offline assignment is usually used when the architecture is implemented for inference purposes <cit.>. The common approach for the offline assignment depends on finding the minimum integer bit-width I_N that accommodates the maximum value within a group, as in I_N = ⌈log_2(max(|n_max|,|n_min|))⌉, where n_max, n_min are the maximum and minimum values within a group N. The remaining bits w_N-I_N are allocated to the fractional part F_N. This approach is used for example in <cit.>. However, as the presence of outliers in a group results in an unnecessary increase in the integer bit-width, the outliers can be excluded before calculating the bit-width I_N <cit.>. Several works minimize the impact of the outliers by selecting a scaling factor that minimizes the error between computed and real values <cit.>. For instance, the K-L divergence between FLP32 and DFXP weight distributions is used in <cit.>, whereas a greedy algorithm is utilized in <cit.> to determine the best scaling factor. Online scaling factor selection is needed for the training phase, in which the values within each group change frequently <cit.>. Usually, the scaling factor is updated at a given frequency based on the rate of overflow during the training. When the current integer part fails to handle a value in a group, the overflow rate increases. The overflow rate is compared to a threshold to decide whether this scaling factor should be increased or decreased. This threshold can be deterministic and predefined <cit.>, or stochastic <cit.>. Stochastic thresholding was introduced because a lower deterministic threshold results in an inaccurate representation of small values, while a higher threshold causes a large clipping error <cit.>, as illustrated in Figure <ref>. The random switching between the higher and lower thresholds is found to be effective in compensating for the accuracy degradation of low-precision training (less than 6 bits). §.§ DFXP Precision The bit-precision of DFXP (i.e., w_N) can be static, mixed, or dynamic, with different trade-offs between accuracy and hardware efficiency. The static precision, which is used in <cit.>, indicates that the number of bits is statically specified a priori and kept fixed for all groups during training or inference, i.e., w_N_i=w_c for i= 1, …, N_t, where N_t is the total number of groups associated with a specific DNN architecture. The advantage of this scheme is its simplicity from the hardware efficiency point of view. However, the selected precision is not optimal for all groups, layers, and architectures <cit.>. On the other hand, in the mixed-precision scheme, the bit-width, which is determined offline as well, can be different for different groups <cit.>.
The need for mixed precision mainly comes from the fact that different groups (such as weights and activations) have different required dynamic ranges and thus different required numbers of bits <cit.>. As the activation results from the convolution accumulation, it is usually allocated more bits. For instance, using DFXP with 4-bit weights and 8-bit activations gives an accuracy degradation within 2% of the full precision using the ResNet-50 CNN model on the ImageNet dataset <cit.>. In other works, different precision is allocated to different groups in different layers <cit.>. The authors in <cit.> stated that a specific fully connected layer activation is more sensitive to bit reduction and is better allocated 16 bits, while the activation bit-width of the other layers can be shrunk to 8 bits. This mixed precision allows them to achieve a 55.64% saving for weights' storage and 69.17% for activations' memory traffic with less than 2.5% loss in accuracy when the AlexNet model and ImageNet dataset are used. The experiments in <cit.> show similar results. They found that the groups in shallower layers are less robust to bit reduction than the ones in deeper layers. In addition, the computation of the first and the last network layers should use high bit-precision to achieve better performance. To optimize the mixed precision for different groups and to reach the above conclusions, the authors in <cit.> adopted an iterative bit-precision reduction scheme that aims to discover the groups for which the bit precision can be reduced without causing noticeable performance degradation. When DFXP with mixed precision is used for training, sometimes different bit-widths are used for the weights during the updates than during the forward and backward propagations <cit.>. Using higher precision for the weight updates allows the small changes in the weights to be accumulated precisely. DFXP with dynamic precision has been presented to adjust the bit-width on the fly during training in order to speed up this process <cit.>. The scheme in <cit.> suggests starting with an aggressive initial target bit-width and monitoring the training loss as feedback from the training process. If the training becomes unstable, the bit-width is increased to its maximum value. Afterward, the target bit-width for the next trial is increased by a unit step. This procedure is repeated until reaching the minimum target bit-width that allows for stable training. To maintain the low overhead of this algorithm, it is activated once after each forward/backward computation to find the global bit-width of the DNN architecture. A simpler search-based scheme to adapt the bit-width of each layer is suggested in <cit.>. In this scheme, the convolution is calculated at both a low and a high bit-width at the same time for several iterations per epoch. If the difference between the high and low precisions is higher than a predefined threshold, the bit-width is increased starting from the next iteration until the end of the epoch. After applying this scheme to different datasets and different CNN models, an interesting conclusion was that different datasets require different average bit-widths even if the same model is used. One added complication of utilizing the dynamic bit-width is the need to design a configurable processing unit that can be configured to compute with various bit-widths during run-time.
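A minimal sketch of such a search-based adaptation is given below; the layer interface, the candidate widths, and the threshold value are our own illustrative choices rather than the exact scheme of the cited work.

```python
import numpy as np

def pick_layer_bitwidth(run_layer, batch, widths=(4, 6, 8, 16), tol=1e-2):
    """Return the smallest bit-width whose output stays close to the widest one.

    `run_layer(batch, width)` is assumed to evaluate the layer with all of its
    groups quantized to `width` bits (hypothetical interface for illustration).
    """
    reference = run_layer(batch, widths[-1])      # highest precision as the yardstick
    for width in widths[:-1]:                     # cheapest widths first
        out = run_layer(batch, width)
        rel_err = np.linalg.norm(out - reference) / (np.linalg.norm(reference) + 1e-12)
        if rel_err <= tol:                        # low precision is good enough
            return width
    return widths[-1]                             # otherwise keep the full width
```

Note that switching the width at run time in this way presupposes arithmetic units that can actually operate at each of the candidate widths.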
Thus, the efficiency of the dynamic precision scheme is highly affected by the hardware's support for the required bit-width levels. Two relatively high bit-width levels (32 bits and 64 bits) are adopted in <cit.>. The baseline precision used to prove the efficiency of their proposed approach is 64 bits, which is a relatively high training precision compared to other works. On the other hand, <cit.> could train the CNN with negligible loss of training and testing accuracy using an average bit-width of less than 8. This is because they were able to use finer bit-width levels thanks to the bit-slice serial architecture they proposed. §.§ Summary and Discussion of DFXP-based DNN Architectures DFXP and BFP are very similar representations. DFXP can be considered as a subset of BFP with a smaller dynamic range and less hardware complication at the same time. For example, when the DFXP representation is used for CNN inference, the only additional hardware required over FXP is a simple bit-shifter to align the output activation with the scale factor of the next layer's input activation <cit.>. This simplicity makes it appealing for many DNN architectures <cit.>. By considering the number of accelerators in the literature that utilize each representation, DFXP can be considered the most widely used alternative number system. The widespread use of DFXP can be attributed to its simplicity and to its implementation in some publicly available DNN frameworks, such as Ristretto <cit.>. Several of the DFXP-based DNN architectures use this representation without much modification to the vanilla DFXP format. Other works used different approaches to select the scaling factor of each group and to optimize the bit-width of this representation. The different approaches used for these design metrics are discussed and compared above. § POSIT FOR DNN ARCHITECTURES The Posit number system, also known as the type III universal number (Unum) system <cit.>, is a floating-point-like format that is proposed to overcome several shortcomings of the FLP representation <cit.>. Compared to FLP, Posit uses its bits more efficiently (allowing better accuracy with the same number of bits) <cit.>, and has a better accuracy and dynamic range <cit.>. Figure <ref> illustrates the Posit representation. The w-bit Posit number representation consists of four fields: a sign (1 bit), a regime (of variable length rs ∈ [1,w-1]), an exponent e (an unsigned integer of fixed length es), and a mantissa (of variable length ms=w-rs-es-1). The regime field contains d consecutive identical bits and an inverted terminating bit (i.e., rrr …r̅)[This is the general case when rs<(w-1). Otherwise, the regime pattern can be rrr …r when it is terminated by the end of the w bits <cit.>. ]. The numerical value of a real number n is represented in the Posit format (by n̂) as follows: n̂ =(-1)^s × u^k × 2^e × (1+m/2^ms), where s, e, and m are the values of the sign bit, exponent and mantissa, respectively. The useed u and the value k are calculated in (<ref>) and (<ref>), respectively: u = 2^2^es, and k = -d if r = 0, or k = d-1 if r = 1. The Posit representation is commonly characterized by two parameters, namely w and es, and denoted Posit(w,es) <cit.>. The parameter es is used to control the trade-off between the precision and the dynamic range <cit.>. When Posit is intended to be used for DNNs, these parameters are usually specified in an offline manner regardless of whether the architecture targets DNN training or inference <cit.>.
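To show how these fields combine in practice, the following sketch decodes a Posit(w, es) bit pattern into its real value. It is a plain software reference written for this survey (not a hardware design), and it handles the two special encodings, zero and Not-a-Real, explicitly.

```python
def decode_posit(bits, w, es):
    """Decode a w-bit Posit(w, es) pattern, given as an unsigned integer."""
    if bits == 0:
        return 0.0
    if bits == 1 << (w - 1):
        return float("nan")                          # Not-a-Real (NaR)
    sign = (bits >> (w - 1)) & 1
    if sign:                                         # negatives are stored in two's complement
        bits = (1 << w) - bits
    body = bits & ((1 << (w - 1)) - 1)               # everything after the sign bit
    # Regime: a run of identical bits, terminated by an inverted bit or by the word end.
    r = (body >> (w - 2)) & 1
    run, i = 0, w - 2
    while i >= 0 and ((body >> i) & 1) == r:
        run, i = run + 1, i - 1
    k = run - 1 if r else -run
    i -= 1                                           # skip the terminating (inverted) bit
    remaining = max(i + 1, 0)
    # Exponent: up to `es` bits, treated as zero-padded if the word ran out.
    e_bits = min(es, remaining)
    e = ((body >> (remaining - e_bits)) & ((1 << e_bits) - 1)) << (es - e_bits)
    # Mantissa: whatever is left.
    ms = remaining - e_bits
    m = body & ((1 << ms) - 1)
    useed = 2 ** (2 ** es)
    value = (useed ** k) * (2 ** e) * (1 + m / (1 << ms))
    return -value if sign else value

# Example with w = 7, es = 1: the pattern 0110110 has regime "110" (k = 1),
# exponent bit 1 and mantissa "10", so the value is 4**1 * 2**1 * 1.5 = 12.0.
print(decode_posit(0b0110110, w=7, es=1))
```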
The selection of these parameters is usually done by experimenting with different parameters and selecting those that give the best accuracy <cit.> or those that offer the best balance between accuracy and hardware efficiency. For instance, when the exponent length is set to es=1 in <cit.>, a better trade-off between accuracy and energy-delay-product is obtained for w=7 and w=5. On the other hand, the author in <cit.> decided to eliminate the exponent part (i.e., es=0) as the Posit, in this case, better represents the dynamic range of the used DNN weights. There are two main differences between the Posit and FLP representations, as shown in Figures <ref> and <ref>. The first difference is the presence of the regime field, and the second is the variability of the mantissa bit-length. Indeed, the innovation in the Posit format comes from its ability to allocate more bits to the mantissa when the represented number is very small (i.e., higher precision) and fewer bits for large numbers (i.e., larger magnitude) without changing the total bit-width of the format <cit.>. Posit is usually known for its tapered accuracy, i.e., numbers with magnitude close to 1 have more accuracy than extremely large or extremely small numbers <cit.>. The authors in <cit.> compared the decimal accuracy (-log_10|log_10 (x̂/x)|, where x is the actual real number value and x̂ is the represented number value <cit.>) of different Posit representations to FLP8 and FXP8, see Figure <ref>. Their experiment showed that: i) the FXP representation has a peak accuracy, so it is suitable for representing data with a narrow range, ii) the floating point has almost constant accuracy and should be used to represent data that are uniformly distributed in order to exploit its efficiency, and iii) Posit has tapered accuracy, which makes it suitable for representing normally distributed data efficiently. Since data in DNNs usually are normally distributed, see for example Figure <ref>, Posit is expected to be the most attractive number system for DNNs <cit.>. DNN architectures that use the Posit number system usually either rely on the Posit format end-to-end <cit.> or partially utilize this format, in which case a conversion from and to other formats is required within the architecture <cit.>. These two approaches of using Posit are discussed next. In addition, to increase the efficiency of the Posit number system for DNNs, several Posit variants have been proposed. These variants are reported below as well. §.§ End-to-end Posit-based Architectures When DNN data are represented in Posit end-to-end, new hardware that is able to perform all operations on these data must be used. In this case, the most fundamental arithmetic operations that need to be carefully designed in hardware are the MAC operation and the activation functions <cit.>. Different designs of the Posit-based MAC (or multiplier) are proposed in <cit.>. In most of these works, the MAC design mainly follows the standard FLP MAC as in <cit.>. The main additional steps over the FLP MAC design are the decoding to extract the Posit fields of the operands and the encoding of the result to the Posit format <cit.>. Indeed, the Posit MAC hardware implementation is more complicated and less efficient than the FLP MAC with the same number of bits because of the length-variability of the regime and mantissa fields. It is shown in <cit.> that a Posit(32, 6) multiplier has 78% more area and consumes 94% more power than the FLP32 multiplier.
This is attributed to the fact that the multiplier should be designed to handle the extreme lengths of the mantissa, which is w-es-2 bits, and the regime, which is w-1 bits. In addition, the critical path of this Posit multiplier is found to be longer than that of FLP32 due to the sequential bit decoding required for Posit. By making the fields of the Posit format fixed, the area and power efficiency increased by 47% and 38.5%, respectively, over the variable-length-field Posit, at the expense of a negligible accuracy loss. Similar results are shown in <cit.> as well. Alternatively, to design a more power- and area-efficient multiplier, the authors in <cit.> proposed a Posit-LNS-approximate multiplication. This combination allows for exploiting the advantages of Posit accuracy and LNS hardware efficiency. The general concept of performing LNS multiplications is similar to Mitchell's approximation discussed in Section <ref>, but applied to the Posit format instead. For example, the logarithm of a Posit number is given in (<ref>) by taking the logarithm of both sides of (<ref>) and applying the approximation in (<ref>). log_2(|n̂|) = 2^es× k+e+ m/2^ms. Consequently, Posit multiplication is performed using fixed-point addition. The experiments in <cit.> showed a significant reduction in the multiplier area of 72.86%, power of 81.79%, and delay of 17.01% compared to the Posit multipliers in <cit.>. The implementation of several activation functions for Posit-represented data is discussed in <cit.>. The sigmoid activation function in (<ref>) is found to be easy to implement in hardware for Posit-represented data <cit.>; a few simple bit-cloning and masking operations are adequate to approximate this function. Similarly, fast implementations of the tanh and the Exponential Linear Unit (ELU) activation functions are presented in <cit.> and <cit.>, respectively. §.§ Partial Posit-based Architectures Several architectures aimed to benefit from the high accuracy and dynamic range of Posit while avoiding its hardware inefficiency by representing only the weights with Posit prior to the inference process <cit.>. This enables a significant decrease in both the storage and communication overheads. These weights are then converted back to another format, such as FLP in <cit.> or FXP in <cit.>, during the computation. The only overhead over the hardware of the standard architectures is the set of modules needed to convert from Posit to the other formats and vice versa. The penalty of converting Posit to FXP is an increase in the critical path delay and power consumption of the MAC by 22.8% and 5%, respectively <cit.>. §.§ Posit Variants Two Posit variants have been proposed for DNNs: the fixed-Posit representation <cit.> and the generalized Posit representation <cit.>. As its name indicates, the fixed-Posit representation proposes using a fixed regime length (rs = constant) instead of the variable length of the vanilla Posit. Although the dynamic range and the accuracy of this representation are expected to be less than those of Posit, using this representation results in much more efficient hardware, in terms of power, area, and delay, with negligible loss in classification accuracy (0.12%) when it is used for ResNet-18 on ImageNet <cit.>. The generalized Posit representation <cit.> proposed a modification to the vanilla Posit format to better represent the dynamic range and data distribution of DNNs.
They relied on the observation that a Posit with w<8 and a specific es is unable to accommodate the variability in the parameter distributions and dynamic ranges of different DNN layers and various DNN models. Instead of using mixed-precision Posits, which requires a very large search space (as huge as 4^110 for ResNet-110 when 4 different w values are searched <cit.>), the Posit format is modified by inserting two hyper-parameters that can be adjusted per layer to enable a parameterized tapered accuracy and dynamic range. These two hyper-parameters are the exponent bias and the maximum regime bit-width; they are applied by replacing e in (<ref>) with e+sc, where sc is the exponent bias, and by restricting the number of bits allocated to the regime to rs ≤ rs_max. The exponent bias is used to scale the zone of maximum accuracy (i.e., minimum and maximum magnitude values) downward or upward in order to track the data distribution of different layers. The maximum regime bit-width rs_max controls the maximum and minimum positive representable values. When rs_max=1, the generalized Posit becomes an FLP-like format, whereas it turns into the vanilla Posit format when rs_max=w-1. Various tapered-precision representations can be obtained by selecting rs_max between these two bounds. The experimental results on several datasets and CNN models showed that the generalized Posit offers a considerable accuracy improvement over the vanilla Posit when w<8 bits, at the expense of a relatively moderate increase in energy consumption. §.§ Summary and Discussion of Posit-based DNN Architectures The Posit representation can be considered as a variant of FLP. This representation offers better accuracy and a wider dynamic range than FLP. Thus, Posit can represent DNN data more efficiently with the same number of bits. However, in general, the hardware implementation of Posit is found to be more complicated than FLP hardware, as it relies on the FLP hardware in addition to the hardware needed to convert from and to FLP. Several attempts to enhance Posit hardware efficiency have been discussed above, such as combining Posit with other representations (FXP and LNS) or modifying Posit by fixing or limiting the regime field. § FUTURE DIRECTIONS AND OPEN RESEARCH ISSUES Next, we briefly highlight several issues and opportunities for future research in DNN number systems. This includes dynamic number representations, hybrid number systems, and the utilization of DNN characteristics. §.§ Dynamic Number Systems The main challenge of using low-precision number systems for training DNNs is the dynamic distribution of weights, activations, and gradients during training. In addition, several works show that the optimal parameters of the number system (e.g., bit-widths) can be different for different datasets. This makes a dynamic number system (i.e., a number system that can adjust its parameters either offline or during run-time) highly desirable, especially for training DNNs. However, implementing such a system with online adaptation adds complications to the hardware, which should be re-configurable to adapt to the changes in the number system format. Several works that adopt a format with a dynamic bit-width, for example <cit.>, discussed the worthiness of this approach from a software (accuracy and speed gain) point of view. It seems worthwhile to also investigate the effectiveness of a dynamic number system from the hardware efficiency perspective. §.§ Hybrid Number Systems Several hybrid number systems have been investigated.
Some examples of hybrid representations include DFXP with binary FXP <cit.>, DFXP with ternary FXP <cit.>, DFXP with FLP <cit.>, dual DFXP with DFXP <cit.>, FXP with Posit <cit.>, BFP with LNS <cit.>, Posit with LNS <cit.>, and RNS with LNS <cit.>. Combining two number systems allows gaining the benefits offered by both. The hybrid representations are found to be more efficient, from both a hardware and an accuracy point of view, than using each representation separately. More combinations of these representations can be investigated in the future. For example, applying the same concept of BFP (i.e., each block shares the same exponent) to the Posit number system is expected to relieve the hardware complexity compared to the vanilla Posit number system. §.§ Utilization of DNN Characteristics DNNs have special characteristics that should be considered when searching for more efficient representations dedicated to DNNs. For example, the ability of neural networks to tolerate noise is exploited in <cit.> to design an efficient LNS multiplier by reducing the average rather than the absolute error introduced by the multiplier. This results in enhancing the accuracy of the DNN instead of ruining it, as would be anticipated when using approximate multipliers. Another example of utilizing the noise tolerance of DNNs is using stochastic rounding (i.e., rounding the number up or down at random) when the real number is mapped to a specific representation. This kind of rounding allowed for training DNNs with lower precision when integrated with FXP <cit.>, BFP <cit.>, Posit <cit.>, or DFXP <cit.>. Similarly, the ability to cluster DNN data into groups with narrower dynamic ranges gave birth to the BFP and DFXP representations. Moreover, realizing that DNN data are normally distributed shed light on the effectiveness of using the Posit number system, which has tapered accuracy. For future work on DNN number systems, these and other DNN characteristics should be taken into account to achieve more efficient representations. § SUMMARY AND CONCLUSIONS Deep neural networks have become an enabling component for a myriad of artificial intelligence applications. Being successful in providing great performance and even exceeding human accuracy, they have attracted the attention of academia and industry. The great performance of DNNs comes at the expense of high computational complexity and intensive memory requirements. Thus, increased attention is paid to redesigning DNN algorithms and hardware, in an effort to enhance their performance and/or enable their deployment on edge devices. A research direction that has a great impact on the performance of DNNs is their number representation. A great body of research has focused on finding number systems more suitable than FLP and FXP, tailored for DNNs. The standard FLP representation has a massive dynamic range, which makes it a good choice for computationally intensive algorithms that include a wide range of values and require high precision. At the same time, the complex and power-hungry FLP calculations make it less attractive for DNN architecture implementation. On the other hand, FXP for DNN implementation offers great hardware efficiency at the expense of accuracy degradation. Between the two extreme representations (FLP and FXP), there are several number systems that are used for DNNs and offer different trade-offs between energy efficiency and acquired accuracy.
The surveyed alternative number systems for DNNs are the LNS, RNS, BFP, DFXP, and Posit number systems. The main objective of using LNS is to simplify the implementation of the costly multiplication operation and to obtain multiplication-free DNN accelerators. This hardware simplification allows for significant savings in area, power consumption, and cost, with some accuracy degradation [This is the common case. However, several works that adopted LNS showed no accuracy degradation. See Table <ref> and Table <ref>.] resulting from the logarithmic approximation. This makes LNS a good choice when DNNs are deployed on resource-constrained devices for accuracy-resilient applications. The RNS can be an attractive number representation choice for DNN accelerators. RNS exhibits inherent parallelism at the residue-processing level. It utilizes parallel computations along separate residue channels, where operations in each of them are performed modulo a specific modulus, with no need for information to be shared between residue channels. The main challenge in designing an efficient RNS-based accelerator is to minimize or, possibly, eliminate the overhead introduced when implementing the non-linear DNN operations. Another key factor is the optimization of the moduli selection and the corresponding arithmetic circuits, to meet the accuracy requirements. BFP strikes a balance between the FLP and FXP formats. Consequently, different trade-offs can be obtained by specifying different BFP design choices represented by the block size, shared exponent selection, and bit-width choice. Most of the surveyed DNN architectures that depend on BFP achieved negligible accuracy degradation compared to FLP even with less than 8 bits, with varying levels of speed, power, and area efficiency. DFXP can be considered as a subset of BFP with a smaller dynamic range and less hardware complication at the same time. While BFP is closer to FLP, DFXP is more like FXP (as their names indicate). This results in different trade-offs between DNN metrics (accuracy, power consumption, speed-up, etc.). Finally, the Posit representation can be considered as a variant of FLP. Offering better accuracy and a wider dynamic range, Posit can represent DNN data more efficiently with the same number of bits as FLP. This allows for a larger reduction in the number of bits compared to FLP implementations with similar accuracy. However, Posit has complex hardware, due to the hardware needed to convert Posit numbers to another number system (basically FLP) to do the arithmetic operations in the other domain before returning to the Posit domain. The efforts made to enhance its hardware efficiency have been discussed in this survey. For all the aforementioned alternative number systems, their impact on the performance and hardware design of DNNs has been reported in detail. In addition, this article highlighted the challenges associated with the implementation of each number system and the different solutions proposed to address these challenges. § ACKNOWLEDGMENTS This work was supported by the Khalifa University of Science and Technology under Award CIRA-2020-053.
http://arxiv.org/abs/2307.04004v1
20230708161850
MAP-NBV: Multi-agent Prediction-guided Next-Best-View Planning for Active 3D Object Reconstruction
[ "Harnaik Dhami", "Vishnu D. Sharma", "Pratap Tokekar" ]
cs.RO
[ "cs.RO", "cs.MA" ]
MAP-NBV: Multi-agent Prediction-guided Next-Best-View Planning for Active 3D Object Reconstruction Harnaik Dhami* Vishnu D. Sharma* Pratap Tokekar *Equal contribution. Names are listed alphabetically. Authors are with the Department of Computer Science, University of Maryland, U.S.A. This work is supported by the ONR under grant number N00014-18-1-2829. August 12, 2023 ===================================================================================================================================================================================================================================================================== We propose MAP-NBV, a prediction-guided active algorithm for 3D reconstruction with multi-agent systems. Prediction-based approaches have shown great improvement in active perception tasks by learning the cues about structures in the environment from data. But these methods primarily focus on single-agent systems. We design a next-best-view approach that utilizes geometric measures over the predictions and jointly optimizes the information gain and control effort for efficient collaborative 3D reconstruction of the object. Our method achieves 22.75% improvement over the prediction-based single-agent approach and 15.63% improvement over the non-predictive multi-agent approach. We make our code publicly available through our project website: <http://raaslab.org/projects/MAPNBV/> § INTRODUCTION Visual surveying and inspection with robots have been studied for a long time for a wide range of applications such as inspection of civil infrastructure <cit.> and large vehicles <cit.>, precision agriculture <cit.>, and digital mapping for real estate <cit.>. The utilization of robots in these applications is highly advantageous as they can access hard-to-reach areas with greater ease and safety compared to situations with direct human involvement. Recent work on making robots autonomous for these tasks makes their use more appealing. This work focuses on one such long-studied problem, 3D object reconstruction <cit.>, where the objective is to digitally reconstruct the object of interest by combining observations from multiple vantage points. While it could be easier to achieve this in an indoor environment by carefully placing sensors around the object, the same cannot be achieved outdoors and in open areas. For the latter, the sensor(s) must be moved around the object to capture information from different viewpoints. This can be realized with sensors such as cameras and LiDARs mounted on unmanned aerial vehicles (UAVs). A UAV with unlimited power supply capacity could capture an unlimited number of observations for an almost perfect reconstruction of the object, but the real-world limitation of battery capacity adds another dimension to the problem: achieving an accurate 3D reconstruction as fast as possible. The trade-off between reconstruction accuracy and task duration in unknown environments is commonly addressed through Next-Best-View (NBV) planning, wherein a robot determines the optimal location for the next observation to maximize information gain. Numerous solutions have been proposed by the research community to tackle this problem, with a majority of them catering to single-agent systems <cit.>. However, deploying a team of robots instead of a single agent can enhance task efficiency multi-fold, while also offering additional benefits such as fault tolerance through redundancy.
But the direct application of single-agent NBV methods to multi-agent systems does not translate well in terms of performance. This issue stems from the potential overlap between the individual observations. An efficient multi-agent NBV formulation requires coordination among robots to build a joint representation and minimize the overlap. In this work, we extend our previous work on prediction-driven single-agent NBV, Pred-NBV <cit.>, to a team of robots for 3D reconstruction to bring the advantages of the prediction-guided approach to a multi-agent system. We call this multi-agent prediction-based next-best-view method MAP-NBV. Pred-NBV <cit.> uses a 3D point cloud prediction network along with a geometric NBV approach while also considering the control effort required for object reconstruction. An important feature of Pred-NBV is that it doesn't require the partially observed point cloud to be centered at the full object center, an implicit assumption in many 3D reconstruction networks. Naively extending Pred-NBV to a team of robots would result in significant overlap, as all the agents would move in the same direction to maximize individual information gain. This is inefficient, as it would be more advantageous for the robots to move in different directions. MAP-NBV solves this issue by defining NBV measures over the joint observation. We accomplish this by removing duplicate points in observations from multiple robots when calculating the information gain. Along with this, we account for the total control effort in our NBV objective, which results in efficient planning for the whole team. We make the following contributions in this work: * We propose a multi-agent, prediction-based NBV planning approach for active 3D reconstruction of various objects with a novel objective combining visual information gain and control effort. * We modify a single-agent baseline NBV algorithm based on <cit.> that uses frontier-based information gain, and extend its functionality to effectively operate in multi-agent settings. * We show that our method outperforms Pred-NBV <cit.>, a single-agent prediction-based algorithm, by 22.75% and the multi-agent version of a traditional NBV baseline <cit.> by 15.63%. We share qualitative results and release the project code on our project website[<http://raaslab.org/projects/MAPNBV/>]. § RELATED WORK The use of robots for data acquisition purposes is an extensively studied topic for various domains. Their uses range from infrastructure inspection <cit.> and environment monitoring <cit.> in real-world applications to the digitization of the real world for research datasets and simulations <cit.>. When the environment is unknown, active methods such as next-best-view (NBV) are used to construct an object model on the fly by capturing additional observations. A majority of the works on NBV planning use information-theoretic measures <cit.> for selection to account for uncertainty in observations <cit.>. The widely used frontier and tree-based exploration approaches also utilize uncertainty about the environment for guiding the robot motion <cit.>. Some works devise geometric methods which make inferences about the exact shape of the object of interest and try to align the observations with the inferred model <cit.>. Prediction-based NBV approaches have emerged as another alternative in recent years, where a neural network takes the robot and/or the environment state as the input and an NBV pose or velocity as the output <cit.>.
A majority of the existing work on NBV is focused on single-robot systems. The task performance can be enhanced by adding more robots to the system, but directly extending single-robot NBV approaches to multi-robot systems may result in sub-optimal performance due to significant overlap in observations. This issue led to the development of exploration algorithms specifically for multi-robot systems <cit.> with information-theoretic measures for determining the NBV. Some recent works on multi-robot systems have explored the utilization of predictions for improvement in task efficiency. Almadhoun et al. <cit.> designed a hybrid planner that switches between a classical NBV approach and a learning-based predictor for NBV selection but uses a partial model obtained by robot observations only. Wu et al. <cit.> use a point cloud prediction model for plants to use the predicted point cloud as an oracle, leading to better results than the traditional approaches. This method uses entropy-based information gain measures for NBV and is designed for plant phenotyping with robotic arms. These methods do not consider the control effort required, which is important for UAVs with energy constraints when deployed for observing large objects such as airplanes and ships. Also, these works employ information-theoretic NBV approaches. We aim to explore a prediction-based approach for geometric NBV selection. In this work, we extend Pred-NBV <cit.>, which also uses point cloud prediction, and build a multi-robot NBV planner. The prediction on the point cloud makes the pipeline modular and interpretable, and it can be improved by improving individual modules. We select the NBV based on information gain, as well as control effort, making our approach more grounded in real-world limitations. § PROBLEM FORMULATION We are given a team of n robots, each equipped with a 3D sensor. The team flies around a closed object of volume 𝒱∈ℝ^3 and observes the points on its surface 𝒮⊂𝒱. The surface points s_i observed by the robot r_j from the viewpoint ϕ_k ∈Φ are represented as a voxel-filtered point cloud, and the relationship between them is defined as s_i = f(r_j, ϕ_k). The robot r_j follows a trajectory ξ_r_j, consisting of multiple viewpoints, and keeps track of the points observed so far. The distance traveled by a robot between two poses ϕ_i and ϕ_j is represented by d(ϕ_i, ϕ_j). The point cloud observed by the team of robots is the union of the surface points observed by the individual robots over their respective trajectories, i.e., s_ξ = ⋃_i=1^n ⋃_ϕ∈ξ_r_i f(r_i, ϕ), and ξ represents the set of trajectories for each robot, i.e., ξ = {ξ_r_1, ξ_r_2,..., ξ_r_n}. The objective is to find a set of feasible trajectories ξ^* = {ξ_r_1^*, ξ_r_2^*, ..., ξ_r_n^*}, such that the team observes the whole voxel-filtered surface, while also minimizing the total distance traveled by the robots on their respective trajectories. ξ^* = arg min_ξ ∑_i=1^n ∑_j=1^|ξ_r_i| - 1 d(ϕ_j, ϕ_j+1)  such that  ⋃_i=1^n ⋃_ϕ∈ξ_r_i f(r_i, ϕ) = 𝒮 Given a finite set of trajectories, if 𝒮, the object model, is known, the optimal set of trajectories can be found with an exhaustive search. As the object model is not known a priori in an unknown environment, the optimal solution cannot be found beforehand. Thus, each robot needs to move based on the partial observations of the team to determine the NBV to reconstruct the object's surface.
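To make the objective concrete, the following sketch (our own illustration, with a hypothetical observation function f and distance function d) performs exactly this exhaustive search for the case where 𝒮 is known; it is the search that stops being possible once the object is unknown.

```python
from itertools import product

def path_length(trajectory, d):
    """Total distance along one robot's sequence of viewpoints."""
    return sum(d(a, b) for a, b in zip(trajectory, trajectory[1:]))

def best_trajectory_set(candidates_per_robot, surface, f, d):
    """Exhaustive search over one candidate trajectory per robot.

    f(robot, pose) returns the set of surface points seen from pose; a
    combination is feasible only if the union of all observations covers
    the whole voxel-filtered surface.
    """
    best, best_cost = None, float("inf")
    for xi in product(*candidates_per_robot):          # one trajectory per robot
        seen = set()
        for robot, trajectory in enumerate(xi):
            for pose in trajectory:
                seen |= f(robot, pose)
        if seen >= surface:                            # coverage constraint
            cost = sum(path_length(t, d) for t in xi)
            if cost < best_cost:
                best, best_cost = xi, cost
    return best, best_cost
```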
Here we assume that each robot can observe the object at the start of the mission, which can be accomplished by moving the robots till they see the object. In this work, we define this problem in a centralized form; all the robots share their observations with a central entity that prescribes the NBV for each by solving the aforementioned objective. § PROPOSED APPROACH In this paper, we present Multi-Agent Pred-NBV (MAP-NBV), a model prediction-guided NBV approach for a team of robots. Figure <ref> shows the overview of our process, which consists of two parts: (1) 3D Model Prediction, where we combine the observations from all the robots to build a partial model of the object and use PoinTr-C <cit.>, a 3D point cloud completion network, to predict the full shape of the object, and (2) Multi-Agent NBV Algorithm, which uses the partial model and the predicted model to determine the NBV for the team, while trying to minimize the distance traveled. Our NBV solution performs a greedy selection over the candidate points to generate the trajectory, which also reduces the computation complexity. The following subsections provide further details of our approach. §.§ 3D Model Prediction To start, the target object is segmented out from the rest of the environment in the captured RGB images for each UAV. This allows the algorithm to focus on only the target infrastructure as opposed to also including other obstacles. Then, each of these segmented images is aligned with the captured depth image per UAV to segment the target object out. Point clouds are then generated for each segmented depth image. This gives us a point cloud per UAV that contains points belonging only to the target object. Assuming a centralized system, each UAV's segmented point cloud is transformed into a central reference frame, and the clouds are concatenated into a single point cloud. This point cloud represents the entire multi-agent system's observations of the target object at the current timestamp. The point cloud concatenation can be replaced with a registration algorithm <cit.>, but we use concatenation due to its ease of use. Lastly, this current timestamp's point cloud is then concatenated with previous observations to get an up-to-date observation point cloud. This process is shown in Figure <ref>. In order to get an approximation 𝒱̂ of the full model 𝒱, we use PoinTr-C <cit.>, a 3D point cloud completion network developed by fine-tuning PoinTr <cit.> using curriculum learning over the ShapeNet dataset <cit.>. Unlike PoinTr and similar point cloud completion networks, PoinTr-C does not make implicit assumptions about knowledge of the center of the full model, thanks to fine-tuning over rotationally and translationally perturbed point clouds. Relaxing this assumption makes PoinTr-C more suitable for inputs from an unknown environment than PoinTr. The 3D point cloud of the object, obtained as the union of the observed surface points, is given as input to PoinTr-C, which predicts the full object point cloud 𝒱̂. PoinTr-C was trained over isolated point clouds and therefore requires object point clouds to be isolated from the scene. This can be realized with the help of distance-based filters and state-of-the-art segmentation networks <cit.> without any fine-tuning. An example of an input point cloud and a predicted point cloud is shown in Figure <ref>. §.§ Next-Best View Planner We use the predicted point cloud as an approximation of the ground truth point cloud for NBV planning.
For this, we first generate a set of candidate poses around the partially observed object. From these, we select a set of n poses, one corresponding to each robot, based on information gain and control effort. The information gain for the set of n viewpoints is defined as the number of new, unique points expected to be observed after the robots move to these viewpoints. The control effort is defined as the total distance covered by the robots in moving to the viewpoints. The number of new points varies in each iteration since the robots observe more of the surface of the object as they move to new locations. While PoinTr-C predicts the point cloud for the whole object, the robots can observe only the surface points. Hence, before counting the number of new points, we apply hidden point removal <cit.> to the predicted point cloud. We represent this relationship between the number of points observed and the trajectories traversed up to time t by I(ξ_t), where ξ_t = {ξ_r_1, ξ_r_2, ..., ξ_r_n}_t represents the set of trajectories for all the robots up to time t. To balance the information gain and control effort, we use a hyperparameter τ which is kept fixed throughout an episode. The robots select the candidate pose set that results in at least a fraction τ of the maximum possible information gain over all candidate poses. Thus, we formulate our multi-agent NBV objective as follows. {ϕ_r_1, ϕ_r_2, ..., ϕ_r_n}_t+1 = arg min_ϕ∈𝒞 ∑_i=1^n d(ϕ_r_i, ϕ_r_it)  such that  ⋃_i =1^n I(ξ_r_it∪ϕ) / max_ϕ∈𝒞⋃_i =1^n I(ξ_r_it∪ϕ) ≥ τ In our experiments, we implement the information gain by first isolating the predicted points that can be observed from a given set of viewpoints and then taking a union of such points from each agent to identify the unique points in the joint observation. The number of points thus obtained is used as the information gain. For finding the control effort, we use RRT-Connect <cit.> to find the path from a robot's current location to each candidate pose. The candidate poses are generated similarly to Pred-NBV <cit.>, i.e., on circles at different heights around the center of the predicted object point cloud. One circle is at the same height as the predicted object center with radius 1.5 × d_max, where d_max is the maximum distance of a point from the center of the predicted point cloud. The other two circles are located above and below this circle, 0.25 × z-range away, with a radius of 1.2 × d_max. The viewpoints are located at steps of 30^∘ on each circle. We set τ = 0.95 for all our experiments. § EXPERIMENTS AND EVALUATION In order to gauge our method's effectiveness, we compare it with a non-predictive multi-agent baseline and a prediction-driven NBV approach which was developed for a single agent. While the first highlights the benefits of including predictions in the NBV pipeline, the latter supports the argument for using a team of robots. §.§ Setup We extend the setup in Pred-NBV <cit.> to work in a multi-agent setting. Similarly, we use Robot Operating System (ROS) Melodic and AirSim <cit.> on Ubuntu 18.04 for our simulation experiments. Multiple UAVs are spawned into the AirSim environment. We equipped each of the UAVs with a depth camera and an RGB camera. Each UAV published a segmented image using AirSim's built-in segmentation. We adapted the depth segmentation package from Pred-NBV to work with multiple UAVs. We then converted these segmented depth images into 3D point clouds.
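The conversion from a segmented depth image to a point cloud is a standard pinhole back-projection; a minimal sketch is shown below, where the intrinsics are placeholders for illustration and the real values come from the simulated camera configuration.

```python
import numpy as np

def depth_to_point_cloud(depth, mask, fx, fy, cx, cy):
    """Back-project the masked pixels of a metric depth image into 3-D points."""
    v, u = np.nonzero(mask)          # pixel coordinates belonging to the target object
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.column_stack((x, y, z))

# Placeholder intrinsics, for illustration only:
# cloud = depth_to_point_cloud(depth_img, seg_mask, fx=320.0, fy=320.0, cx=320.0, cy=240.0)
```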
For our collision-free point-to-point planner, we use the MoveIt <cit.> software package implementing the work done by Köse <cit.>. §.§ Qualitative Example We evaluate MAP-NBV on the same 20 objects that were used in Pred-NBV to allow a direct comparison. The 20 objects consist of 5 different ShapeNet classes: airplane, rocket, tower, train, and watercraft. Examples of each class are shown in Figure <ref>. These classes represent diverse shapes and infrastructures that are regularly inspected. Figure <ref> shows the path followed by 2 UAVs as given by MAP-NBV in the C-17 airplane simulation. This environment includes other obstacles that are not of interest but still need to be accounted for in collision-free path planning. MAP-NBV finds a collision-free path for both UAVs while targeting the maximum coverage of the C-17 airplane. §.§ Comparison with Single-agent Baseline We compared the performance of MAP-NBV with a single-agent prediction-based NBV planner called Pred-NBV <cit.>. MAP-NBV is an extension of Pred-NBV designed for multi-agent scenarios. However, in single-agent cases, both algorithms function identically. In MAP-NBV, UAVs are spawned close together, ensuring that the initial environment information is virtually the same as in the single-agent Pred-NBV case. Consequently, the initial points observed and the initial shape completion predictions for both algorithms are highly similar. This means that MAP-NBV and Pred-NBV select their initial NBVs using nearly identical information. To demonstrate the immediate information gain of MAP-NBV over Pred-NBV, we compare the number of points observed after navigating to the first NBVs selected by the algorithms. Our findings, presented in Table <ref>, reveal that, on average, MAP-NBV observes 22.75% more points after the first iteration compared to Pred-NBV in the context of object reconstruction. These results are based on evaluations across 20 objects and 5 object classes. Furthermore, on average, each UAV in MAP-NBV flew a similar distance to the UAV in Pred-NBV. This similarity arises from both algorithms generating candidate viewpoints in the same manner and employing the same point-to-point planner. §.§ Comparison with Multi-agent Baseline We also compared the performance of MAP-NBV with a modified baseline NBV method <cit.> designed for multi-agent use. The baseline method employs frontiers to select the next-best views. Frontiers are points located at the edge of the observed space near unknown areas. We utilized the same modifications described in Pred-NBV <cit.>. Specifically, we used our segmented point cloud to choose frontiers near the target object. To ensure that the UAVs always face the target object, the orientation of all poses selected by the baseline aligns with the center of the observed target object point clouds. We further adapted this baseline method to function in a multi-agent setting. The pose for the first UAV is selected in the exact same manner as in the single-agent baseline. For each subsequent UAV, the remaining best pose is chosen, as long as it does not fall within a certain distance threshold compared to the previously selected poses in the current iteration of the algorithm. Both MAP-NBV and the baseline algorithm employ the same stopping criteria. The algorithm terminates if the total points observed in the previous step exceed 95% of the total points observed in the current step. 
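A sketch of this greedy assignment is given below; the pose scores stand in for the frontier-based information gain, and the separation threshold is an illustrative parameter rather than the exact value used in our implementation.

```python
import numpy as np

def assign_baseline_poses(candidate_poses, scores, n_robots, min_separation):
    """Greedy multi-agent extension of the single-agent frontier baseline.

    Candidate poses are considered in decreasing order of score; a pose is
    skipped if it lies within `min_separation` of a pose already assigned
    in the current iteration.
    """
    order = np.argsort(scores)[::-1]
    chosen = []
    for idx in order:
        pose = np.asarray(candidate_poses[idx])
        if all(np.linalg.norm(pose - p) >= min_separation for p in chosen):
            chosen.append(pose)
        if len(chosen) == n_robots:
            break
    return chosen
```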
Our evaluation, presented in Table <ref>, demonstrates that MAP-NBV observes, on average, 15.63% more points than the multi-agent baseline for object reconstruction across all 20 objects from the 5 different model classes. In our simulations, we utilized 2 UAVs for both algorithms. Furthermore, the MAP-NBV algorithm can be readily extended to accommodate more than just 2 robots. By incorporating additional UAVs, the algorithm can effectively leverage the collaborative efforts of a larger multi-agent system to improve object reconstruction performance and exploration efficiency. However, in our current evaluation, we utilized 2 UAVs for both algorithms due to limited computational resources. The simulations were computationally intensive, and our computer experienced significant slowdowns with just 2 robots in the simulation. Despite this limitation, the promising results obtained with 2 UAVs suggest that scaling up the algorithm to include more robots has the potential to yield even more significant improvements in performance. Additionally, Figure <ref> illustrates that MAP-NBV observes more points per step than the multi-agent baseline while also covering a shorter flight distance. § CONCLUSION We present a multi-agent, prediction-guided NBV planning approach for active 3D reconstruction. This method can be helpful in a variety of applications including civil infrastructure inspection. We show that our method is able to faithfully reconstruct the object point clouds more efficiently than non-predictive multi-agent methods and single-agent prediction-based methods. Our NBV planning objective considers both information gain and control effort, making it more suitable for real-world deployment given the flight time limit imposed on UAVs by their battery capacity. In this work, we focus solely on geometric measures for information gain. Many existing works on NBV have developed sophisticated information-theoretic measures. We will explore combining both types of measures in our future work. Also, we consider all possible viewpoint pairs for finding the NBV for the team, which hinders the scalability of MAP-NBV. We will look into methods to make this process more computationally efficient and to enable search over a larger candidate viewpoint set.
http://arxiv.org/abs/2307.07271v1
20230714105312
Universal lower bound for community structure of sparse graphs
[ "Vilhelm Agdur", "Nina Kamčev", "Fiona Skerman" ]
math.CO
[ "math.CO", "cs.DS", "cs.SI", "math.PR" ]
Universal lower bound for community structure of sparse graphs Vilhelm Agdur, Nina Kamčev and Fiona Skerman August 12, 2023 ============================================================== We prove new lower bounds on the modularity of graphs. Specifically, the modularity of a graph G with average degree d̅ is Ω(d̅^-1/2), under some mild assumptions on the degree sequence of G. The lower bound Ω(d̅^-1/2) applies, for instance, to graphs with a power-law degree sequence or a near-regular degree sequence. It has been suggested that the relatively high modularity of the Erdős-Rényi random graph G_n,p stems from the random fluctuations in its edge distribution; however, our results imply high modularity for any graph with a degree sequence matching that typically found in G_n,p. The proof of the new lower bound relies on certain weight-balanced bisections with few cross-edges, which build on ideas of Alon [Combinatorics, Probability and Computing (1997)] and may be of independent interest. § INTRODUCTION In numerous real-world examples of graphs, we anticipate a certain community structure – for instance, people form friend groups, neurons cluster into functional units, academic papers divide into subfields. To infer this structure from graph data, several metrics have been proposed to evaluate the quality of vertex partitions. One of the most widely used metrics is modularity, introduced by Newman and Girvan <cit.>. Each vertex partition 𝒜 of a graph is given a modularity score q_𝒜(G), with higher scores taken to indicate that a partition better captures the community structure of a graph. In practice, for large networks, community detection is performed through algorithms that iteratively try to optimise this score <cit.>, such as the Louvain <cit.> or Leiden <cit.> algorithms. The modularity score of a partition 𝒜 of a graph G with m edges is given by q_𝒜(G) = ∑_A∈𝒜 ( e(A)/m - (vol(A)/2m)^2 ), where the sum runs over parts A ⊆ V(G) of the partition, e(A) is the number of edges within part A, and the volume vol(A) is the sum of the degrees of the vertices in part A. As can be seen, the formula for modularity consists of two terms balancing against one another – one which rewards partitions with many edges within the parts, and a sum of squares term that rewards having many small parts, or for a fixed number of parts rewards parts of approximately equal volume. We call these terms the coverage or edge contribution and the degree tax respectively. The score of any partition is between -0.5 and 1 <cit.>. The modularity of a graph, q^*(G), is the maximum of q_𝒜(G) over all partitions 𝒜 of G. It is easy to see that the partition that puts all vertices in the same part gets a score of exactly zero, so that q^*(G) is always between zero and one. While this gives us a reasonable way of comparing two different partitions of the same graph and telling which is better, it does not immediately give us a way of taking a graph and telling if it has a significant community structure or not. If you are given a graph, and compute that its modularity score is 0.23, does this mean the graph has or does not have a community structure? One might initially hope that random graphs with no underlying structure would have modularity essentially zero, but this turns out not to be true, at least if the graph is sparse, which many real-world graphs are. The binomial random graph, G_n,p, is likely to have high modularity so long as the average degree is bounded <cit.>.
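As a small worked illustration of the definition above (ours, not taken from the results below), consider the graph formed by two triangles joined by a single edge, so that m = 7, and let 𝒜 be the partition into the two triangle vertex sets. Each part contains 3 internal edges and has volume 7, so q_𝒜(G) = 2( 3/7 - (7/14)^2 ) = 5/14 ≈ 0.357, whereas the trivial partition into a single part scores exactly zero.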
As proved by Ostroumova, Prałat and Raigorodskii <cit.>, any graph with maximum degree o(n) and average degree d̅ has modularity at least about 2d̅^-1. For random graphs with a given bounded degree sequence (under some natural assumptions), this lower bound was improved to (2+ε)d̅^-1 by Lichev and Mitsche <cit.>. We give another result in this direction, showing that any graph whose degree sequence is not too heavy-tailed will have modularity Ω(d̅^-1/2). One motivation for this result is applications to graphs with a power-law degree sequence, and specifically, to preferential-attachment graphs, which are discussed in Section <ref>, extending a result of <cit.>. The following statement is a concise version of the result as n →∞ (so o(1)-terms are with respect to n), whereas the error terms are stated explicitly as Proposition <ref>. Let G be an n-vertex graph with average degree d̅≥ 1, L = {v ∈ G : d_v < C d̅} for some C > 1, and assume that vol(L) ≥ (1 + γ) m = (1 + γ)n d̅/2 for some γ >0. If Δ(G) n^-1= o(1) and d̅^10 n^-1=o(1), then q^*(G) ≥ 0.26 γ/√(C d̅)(1+o(1)). One way to interpret the result is that, assuming d̅=o(n^1/10), the only obstruction to modularity Ω(d̅^-1/2) is if we have a minority of vertices which contain at least half the volume of the graph. This happens for unbalanced bipartite graphs, as discussed below. The bound Ω(d̅^-1/2) is the best lower bound we could hope for without imposing more conditions, because there exist families of graphs which achieve this bound. For example, d-regular graphs G_n,d, for large enough d, have modularity q^*(G_n,d)=Θ(d^-1/2) <cit.>. Another example is Erdős-Rényi random graphs, see the following section, as well as the Chung-Lu model, see Section <ref>. Modularity from fluctuations in random graphs? Or automatically by average degree? Guimerà, Sales-Pardo and Amaral published a highly influential paper showing the binomial random graph G_n,p can have high modularity <cit.>. They estimated it to have modularity Θ((np)^-1/3), meaning the modularity does not go to zero for constant average degree, the usual regime of interest for real networks. Using deep but non-rigorous insights from statistical physics, Reichardt and Bornholdt <cit.> conjectured the modularity of G_n,p to be Θ((np)^-1/2) whp, and this was confirmed to hold whp for 1/n ≤ p ≤ 0.99 in <cit.>. Notice that this matches the bound in Theorem <ref>. To be precise, since the average degree of G_n,p is tightly concentrated about (n-1)p=np(1+o(1)), <cit.> implies that for 1/n ≤ p ≤ 0.99 and for G∼ G(n,p) whp we have q^*(G)=Θ(d̅(G)^-1/2). Thus, our result shows that these lower bounds on the modularity of G_n,p hold simply because of its average degree and the well-behaved nature of its degree sequence, without needing any appeal to fluctuations or any other particular feature of the model – the same bound holds for any graph with a similar degree sequence. Furthermore, our lower bounds offer a certain level of validation to the concept of modularity as a measure for community structure. It would be less than satisfactory if there existed graphs with considerably lower modularity than a random graph with a similar degree sequence. Our results, on the other hand, imply that random graphs do, in a sense, have the minimum achievable modularity. Modularity results The modularity of graphs from random graph models and the relationship between graph properties and modularity has received much recent attention.
We have mentioned already the deterministic lower bound of 2d̅^-1 for graphs with sublinear maximum degree <cit.> and results for random regular <cit.> and Erdős-Rényi graphs <cit.>. For random cubic graphs Lichev and Mitsche <cit.> proved the modularity whp lies in the interval [0.667, 0.79] and more generally that random graphs with a given degree sequence have modularity at least (2+)d̅^-1. Preferential attachment graphs with h≥ 2 edges added at each step whp have modularity at most 15/16 and at least Ω(d̅^-1/2) <cit.>, see also the next section. Sampling a random subgraph of a given graph by including each edge independently with probability p whp has modularity approximating that of the underlying graph for p such that the expected degree in the sampled graph is at least a large constant <cit.>. There are also graphs known to be `maximally modular' <cit.>, i.e. with modularity tending to 1 as the number of edges tends to infinity. It has been shown that graphs with sublinear maximum degree from a minor-closed class <cit.> and (whp) hyperbolic graphs <cit.> and spatial preferential attachment <cit.> are maximally modular. In another direction, see <cit.> for a geometric interpretation of modularity. Tightness of Theorem <ref> - complete bipartite and random graphs. The result is tight in two senses. Firstly, the γ>0 condition is necessary and secondly the Ω(d̅^-1/2) is the best lower bound we could hope for without imposing more conditions. To see that γ>0 is necessary, we note that for any even d we may construct a graph G with average degree about d with (G)=0 and such that (L)=(G)/2. It is known that complete bipartite graphs have modularity zero <cit.>. Take the complete bipartite graph G=K_d/2, t, and note the average degree is d̅=dt/(d+t) and thus for sufficiently large t we have d̅≈ d. The graph has many vertices with degree d/2, few vertices of degree t and both sets have volume (G)/2. Thus the set L in the theorem statement will consist of vertices of degree d/2 and have volume (G)/2, yielding γ=0. To see that Ω(d̅^-1/2) is the best lower bound possible without imposing more conditions, we recall that a random d-regular graph has modularity at most 2d^-1/2 <cit.>. Moreover, Corollary <ref> gives examples of graphs with a large family of possible degree sequences and modularity Ω(d̅^-1/2). §.§ Application to power-law graphs In this section, we discuss two applications of Theorem <ref>. Informally, graphs with a power-law degree sequence and preferential-attachment graphs have modularity Ω(d̅^-1/2), generalising a result of <cit.>. Many real-world graphs follow a power-law degree distribution, for instance the World Wide Web, genetic networks and collaboration networks <cit.>. This means that the proportion of vertices of degree k is O(k^-τ) for a parameter τ, called the shape coefficient. Most examples found in literature have the shape coefficient τ in the interval (2, 3] – for example roughly 2.2 for the Internet or 2.3 for the movie actors network <cit.>. For τ >2, that is, as soon as the first moment of the sequence is well-defined, most of the volume is on the vertices whose degree is near average. This allows us to apply Theorem <ref> and obtain the following lower bound. Let G be a graph with degree sequence = (d_i)_i ∈ [n], with average degree d̅, satisfying 1/n |{i: d_i ≥ k }| ≤ A d̅^τ -1 k^1-τ for all i, with constants τ >2 and A >0. For b = 0.1 ( (τ-2)/8A)^1/2(τ -2) and sufficiently large n, (G) ≥ b d̅^-1/2. 
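The hypothesis and the constant in the statement above can be evaluated numerically. The sketch below (ours; the Zipf toy sequence and the way A is fitted are our own assumptions, not the paper's method) finds the smallest A for which the tail condition holds at a given τ and reports the resulting lower bound b d̅^{-1/2}.

```python
# For a degree sequence d and exponent tau, find the smallest A with
# (1/n)|{i : d_i >= k}| <= A * dbar^(tau-1) * k^(1-tau) for all k, then evaluate
# b = 0.1 * ((tau-2)/(8A))^(1/(2(tau-2))) and the bound b * dbar^(-1/2) from the statement.
import numpy as np

def smallest_A(degrees, tau):
    d = np.asarray(degrees, dtype=float)
    dbar, ks = d.mean(), np.arange(1, int(d.max()) + 1)
    tail = np.array([(d >= k).mean() for k in ks])            # (1/n)|{i : d_i >= k}|
    return float(np.max(tail / (dbar ** (tau - 1) * ks ** (1.0 - tau))))

def lower_bound(degrees, tau, A):
    b = 0.1 * ((tau - 2) / (8 * A)) ** (1 / (2 * (tau - 2)))
    return b * np.mean(degrees) ** -0.5                       # b * dbar^{-1/2}

rng = np.random.default_rng(0)
degs = np.minimum(rng.zipf(2.5, size=10_000), 5_000)          # toy heavy-tailed sequence
tau = 2.5
A = smallest_A(degs, tau)
print("A =", round(A, 2), " modularity lower bound =", round(lower_bound(degs, tau, A), 3))
```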
As mentioned, the best previously known lower bound for the modularity of graphs satisfying <ref> is 2d̅^-1 <cit.> - since their max degree is sublinear. The modularity was already known to be Ω(d̅^-1/2) for preferential attachment graphs (with δ=0) <cit.> so our Theorems <ref> and <ref> generalise this. Modularity of random power-law graph models There are numerous random graph models which aim to model existing networks with a power-law degree distribution (often referred to as scale-free networks). They fall into two basic categories, * graphs whose degree sequence is specified a priori, and * graphs in which the degrees emerge from stochastic local growth rules, such as preferential-attachment graphs. For (i), a lower bound on the modularity follows directly from Theorem <ref>, assuming that the empirical degree sequence is close to the prescribed one. Notice that this holds by definition for random graphs with a given fixed degree sequence, so Theorem <ref> trivially applies to the uniform model, extending the results of <cit.> for G_n,d. It also implies a lower bound for the Chung-Lu model for graphs with a given expected degree sequence. For the Chung-Lu model, we are also able to give a matching (up to constant factor) lower bound on the modularity, see Section <ref>. The models from category (ii) are usually based on the preferential attachment model (PAM) <cit.>, which is described in more detail in Section <ref>. The preferential attachment model was introduced in a seminal paper by Albert and Barabási <cit.>, which also demonstrated its ability to explain the emergence of scale-free networks and laid the foundation for the study of complex networks. For more information on mathematical properties and applications of the PAM, see for instance <cit.>. For PAM-type models (ii), it is not easy to prove rigorous results about the degree sequence, and controlling high-degree vertices seems particularly inaccessible (see, e.g., <cit.>[Section 2.2] and Proposition <ref> <ref> in this paper). For this reason, Theorem <ref> does not apply directly, but we demonstrate how Theorem <ref> can be applied to the class of preferential attachment models presented in Section <ref>. §.§ Key Techniques Alon's bisection method Throughout this section, G is a given graph with average degree d̅ and we wish to find a bisection of G with modularity Ω(d̅^-1/2). Central to our proof is the method of Alon <cit.> which gives a bisection of the graph with n (d̅/4 - Ω(d̅ ^1/2)) edges between the two parts. Notice that the second term Ω(d̅^1/2) is the deviation from a random bisection. A crucial idea in this method is to find a pairing of the vertices which does not interfere with the edges of the graph in undesired ways, so that a randomised bisection of the vertices along those pairs can be analysed (see Lemma <ref>). In obtaining bisections with high modularity, we face two obstacles – the degree tax, and the fact that Alon's bisection technique only applies to graphs with maximum degree O(n^1/9). Pairings which equalise the volume Regarding the degree-tax obstruction, by definition (<ref>), if a bisection obtained above is to have high modularity, it needs to have degree tax as small as possible, i.e. the two parts need to have approximately the same volume to give degree tax near a half (see definition peq:defmod). Thus our problem is to find a bisection of G with the same guarantee of few edges between parts, but also such that the volume of the two parts is similar. 
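As a rough numerical illustration (ours, not part of any proof): networkx's barabasi_albert_graph is a close relative of the δ = 0 preferential-attachment model (it disallows loops and multi-edges), and Louvain, which only gives a lower bound on mod(G), returns a modularity that can be compared against d̅^{-1/2}. This requires a recent networkx (2.8 or later) for louvain_communities.

```python
# Rough illustration (ours): Louvain modularity of a Barabasi-Albert graph versus
# the d̅^{-1/2} scale.  Louvain is a heuristic lower bound on mod(G), not exact.
import networkx as nx

n, m = 20_000, 4                                   # average degree roughly 2m
G = nx.barabasi_albert_graph(n, m, seed=0)
dbar = 2 * G.number_of_edges() / n
parts = nx.community.louvain_communities(G, seed=0)
q = nx.community.modularity(G, parts)
print(f"avg degree {dbar:.2f}, Louvain modularity {q:.3f}, 1/sqrt(avg degree) {dbar ** -0.5:.3f}")
```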
The technical result allowing us to partition a graph while controlling the volume is Lemma <ref> where we find a pairing of almost all vertices such that the vertices of each pair are near in the degree-ordering of the vertices, but the pairing is still suitable for Alon's bisection technique to apply. This together with a load-balancing result Lemma <ref> yields Theorem <ref> – informally, high modularity for graphs with maximum degree o(n^1/9). Processing high-degree vertices However, the constraint that Δ(G) = o(n^1/9) is still too strong for many desired applications – for instance, graphs with a power-law degree sequence often have a significantly higher maximum degree (see Section <ref>). To circumvent this problem, we essentially apply the bisection method as above to the bulk of the graph, that is, to the vertices whose degrees are not too far above the mean which we denote by L. Then we randomly divide H=V(G)∖ L, the high-degree vertices, into our two parts. With positive probability, such a partition will have modularity Ω(d̅^-1/2) – the main contribution to positive modularity will come from partitioning L, and for H, we only need to show that they behave approximately as expected, even with respect to the previously found partition of L. § WEIGHT-BALANCED BISECTIONS We now describe the bisection idea due to Alon <cit.>. The method starts from a convenient matching on V(G), which we now define. Given a graph G = ([n], E(G)) and a matching M disjoint from E(G), a short loop of G and M is a loop of length at most twelve containing between one and three edges from M and never more than three consecutive edges of G. Note that in particular, this definition implies that M and G are edge-disjoint. Given such a matching M, Alon proposed and analysed a simple randomised algorithm which splits the vertices of the graph G `along' M. We only describe the idea informally as we will not explicitly use it in this paper, it will suffice to use the result Theorem <ref> as a black-box. The first step is to orient the edges of M independently and uniformly at random, which splits the vertex set into the set of sources and sinks in this orientation. An edge uv of M is marked active if reorienting uv would not increase the number of `cross-neighbours' of both u and v in the opposite part. The second step is to uniformly resample the orientations of the active edges, and to output the induced partition. This partition is shown to have very few cross-edges with positive probability, and the requirement for no short loops is important in the analysis. Below we state the result in a self-contained form. In <cit.>, the computations are carried out for d-regular graphs, but the argument covers arbitrary degree sequences verbatim, and this is also stated in the concluding remarks in <cit.>. Throughout the paper, c = 3/8√(2)≈ 0.265 is a fixed constant. Given any graph G, and any perfect matching M on [n] disjoint from E(G) such that there exist no short loops of G and M, there exists a U ⊂ [n] such that M ⊂ U × U^c, and e_G(U, U^c) ≤1/2∑_i=1^n d_i (1/2 - c/√(d_i)). It is convenient to identify the vertex set of our graphs with [n], since this gives a natural ordering on the vertices of G. Later, we will choose a specific vertex ordering to which the following lemma will be applied. Given any graph G = ([n], E(G)) with maximum degree Δ>1, there exists a partial matching M on [n] disjoint from E(G) such that the following holds: 1. For any vw ∈ M, v-w≤. 2. There are no short loops of G and M. 3. 
[n-] ⊂ V(M) Note that the last statement in particular means the lemma is void when n <. Let H = G^3, the graph where there is an edge (u,v) if there is a path of length at most 3 from u to v in G. It is straightforward to verify that our condition 2 on the matching M is implied by 2'. There does not exist any cycle of length two, four or six consisting of alternating edges from H and M. (In particular, H and M are edge-disjoint.) Recalling that the maximum degree of G is Δ>1, we have that the maximum degree of H is Δ(H) ≤Δ + Δ(Δ-1) + Δ(Δ-1)^2 ≤Δ^3 - 1. Intuitively, the idea of the proof is to construct the matching greedily, taking the smallest currently unmatched vertex v and joining it to the first available vertex. It will be enough to show that until the very last rounds, the number of unavailable vertices will be not too large. Vertices are made unavailable when they are already incident to edges in the matching or when there is a particular dangerous configuration of alternating edges (to ensure we do not violate property 2'). Loosely speaking, we maintain an upper bound on the number of unavailable vertices using property 1, which guarantees that the matched vertices are not too far from one another, and is established at the start of each step. We construct this matching M using the following greedy algorithm, see Figure <ref>. We identify the graphs H and M with their edge sets. For a matching M, we write V(M) for the set of vertices incident to an edge of M. We will first show that the argument terminates, i.e. that a suitable vertex w can be found as long as v ≤ n -, and then that the resulting matching M_n- + 1 has the desired properties. Claim: Let M_v^+=V(M_v) ∩{v+1, …, n }. If v ∉ V(M_v) then |F_v^+∪ M_v^+| ≤-1. To begin the proof of the claim, similarly to F_v^+, define F_v^- to be the set of x ∈{1, … v-1} such that there is a path between x and v of the form H, HM_vH or HM_vHM_vH and define F_v=F_v^+ ∪ F_v^-. Let u ∈ M_v^+, and note we have uw ∈ M_v for some w<v. But given the greedy algorithm to construct M, and v<u, this implies that v was not a valid choice for w to pick, that is, it must be the case that v ∈ F_w^+ ∪ V(M_w). Since M_w ⊆ M_v and v ∉ V(M_v), this implies v ∈ F_w^+. Thus there is a path between w and v of the form H, HM_wH or HM_wHM_wH and so w∈ F_v^-, again using M_w ⊆ M_v. In particular, we have shown that for each u ∈ M_v^+ there is a distinct w ∈ F_v^- and hence |F_v^+∪ M_v^+|≤|F_v|. The number of vertices u incident to v is at most Δ(H). Similarly, the number of paths starting from v of the form H M_v H is at most Δ(H)^2 and of the form H M_v H M_v H is at most Δ(H)^3. Thus |F_v|≤Δ(H)+Δ(H)^2+Δ^3 ≤ (Δ(H)+1)^3 -1 ≤ -1 and by (<ref>) we have shown the claim. The claim above implies that {v+1, …, v + }∖ (V(M_v) ∪ F_v) ≠∅ for v ≤ n-, so indeed, there is a valid choice for w in each step, and the algorithm terminates. Let M= M_n- + 1. For each vertex v, V(M_v+1) contains the initial segment [v], so in particular, M contains [n- ], certifying property 3. Finally, let us show that M_v satisfies condition 2' for each v. This clearly holds for M_1 =∅, and suppose that some M_v satisfies condition 2'. If M_v+1 = M_v, M_v+1 clearly satisfies condition 2'. Moreover, if M_v+1 = M_v ∪{vw}, then vw does not close an alternating cycle of length 2, 4 or 6 because w does not lie in F_v^+. In either case, M_v+1 satisfies condition 2', as required. In particular, M_n- Δ^9+1 satisfies condition 2', which completes the proof. 
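The greedy construction in the proof above can be prototyped directly. The sketch below (ours) is a deliberate simplification: it only forbids pairs that are within distance 3 in G (i.e. adjacent in H = G^3), rather than the full forbidden sets F_v^+ built from alternating H/M paths, and it uses a small window in place of Δ^9; the names greedy_near_matching and window are our own.

```python
# Simplified sketch of the greedy matching idea: scan vertices in order, match each
# unmatched vertex v to the nearest later unmatched vertex w that is not adjacent
# to v in H = G^3.  This omits the full forbidden sets from the proof, so it is an
# illustration of the idea, not the exact procedure.
import networkx as nx

def greedy_near_matching(G, window):
    n = G.number_of_nodes()                       # vertices assumed to be 0..n-1, ordered
    H = nx.power(G, 3)                            # edges = pairs at distance <= 3 in G
    matched = {}
    for v in range(n):
        if v in matched:
            continue
        for w in range(v + 1, min(v + window + 1, n)):
            if w not in matched and not H.has_edge(v, w):
                matched[v], matched[w] = w, v
                break
    return matched

G = nx.gnp_random_graph(500, 0.01, seed=2)
M = greedy_near_matching(G, window=50)
print(len(M) // 2, "pairs; max index gap:", max(abs(v - M[v]) for v in M))
```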
We shall use the following load-balancing result to show that the two parts of our partition have similar volume - see <cit.>[Lemma 2.2] or the thesis <cit.>[Lemma 2.1.3]. Suppose that f: [n] → is some non-increasing function. Then, for any perfect matching M on [n] such that for every (i,j) ∈ M, i-j≤ L, and for any orientation of the edges of M, it holds that | ∑_(i,j) ∈ M f(i) - f(j) | ≤ L|f(n) - f(1)|. We can assemble these lemmata into the following proposition. Let G = ([n], E) be a graph with maximum degree Δ satisfying Δ^9 ∈[1,n/2), and let w_max = w_1 ≥…≥ w_n=w_min be non-negative vertex weights. Let w̅ be the average vertex weight, w̅ = 1/n∑_u w_u, and for a set A of vertices, let w(A) = ∑_u∈ A w_u. There exists a partition {A, B, R} of V(G) such that * A = B, * R ⊂{n - Δ^9 + 1, …, n}, * e(A,B) ≤1/2∑_v ∈ A ∪ B^n d_v (1/2 - c/√(d_v)) , * w(A) - w(B)≤Δ^9(w_max - w_min) and * max_v ∈ R w_v ≤ 2w̅. We may apply Lemma <ref> to our graph – recall that we have assumed our weights w_i are decreasing, which will imply that the unmatched vertices will be the ones of lowest weight. We obtain a matching M consisting only of edges ij with i - j < Δ^9. Let R be the set of vertices not matched by M. Lemma <ref> also tells us that R≤Δ^9, and in fact, R is contained in the final segment {n - Δ^9 + 1, …, n}. Let G' = G ∖ R. The graph G' along with the matching M fulfils the conditions of Theorem <ref>, by the construction of M. Hence, we obtain a set U ⊆ [n] such that M ⊆ U × U^c, and e_G'(U, U^c) ≤1/2∑_v ∈ G' d^G'_v (1/2 - c/√(d^G'_v)) ≤1/2∑_v∈ G'^n d_v (1/2 - c/√(d_v)), where the second inequality follows from c < 1/2. Since i - j < Δ^9 for all edges ij of M, we can appeal to Lemma <ref> and get a bound on the difference of the weights of the sets, namely w(A) - w(B)≤Δ^9(w_max - w_min) as desired. Finally, let us bound the weights of the unmatched vertices. We established that the remainder R will be among the Δ^9 vertices of lowest weight. Suppose for contradiction that max_v∈ R w_v is larger than 2w̅ – then, by the ordering of the vertices, all the vertices 1, …, n-Δ^9 must have weight at least 2w̅. So we can compute w̅ = 1/n∑_i=1^n w_i ≥1/n∑_i=1^n-Δ^9 w_i ≥n - Δ^9/n2w̅ = (1 - Δ^9/n)2w̅, and the fact that 1 - Δ^9/n > 1/2 follows from our assumption that Δ^9 < n/2, giving us the desired contradiction. § FROM WEIGHT-BALANCED PARTITIONS TO MODULARITY BOUNDS We now pivot back to considering modularity in particular, as our objective measure on partitions. We start by showing a weaker version of our main theorem, that only applies under the assumption of a maximum degree bound, in order to illustrate the proof in a simpler setting. Then, to prove the main theorem, the main idea is essentially to apply Theorem <ref> to the bulk of the graph, that is, to the vertices whose degrees are not too far from the mean (and so we have a bound on the max degree in this bulk), and then randomly divide the high-degree vertices of the graph into our two parts. Thus the main term of our main theorem is essentially the same as the main term of Theorem <ref>, because this bulk is where the main terms is gained; for the high degree vertices, we merely take a partition that yields roughly the expected number of cross-edges and does not interfere with the previous partition. For technical reasons, it makes more sense to give a standalone proof of our main theorem that does not directly appeal to the following weaker theorem. 
We nevertheless include this theorem because its proof highlights the ideas at play without as many details to obscure them. §.§ Modularity bounds - an easy application of weight-balancing. The following theorem will follow quickly from our weight-balancing result, Proposition <ref>. For any graph G such that Δ^9 ∈[1,n/6), we have (G) ≥c/n∑_i=1^n √(d_i)/d - Δ^20/2(nd̅)^2 . To prove the theorem we will use Proposition <ref>, by taking the vertex degrees as the weights, which gives us a volume balanced nearly-bisection: two large sets A and B with similar volumes and a small remainder set R. The modularity score of the partition into these three sets will be high if the number of edges between A and B is significantly less than half the edges of the graph, the volumes of A and B are sufficiently similar and R has a sufficiently small volume. The following lemma makes this precise. Let G be a graph and ={A,B,R} a vertex partition of G with (R)≤(G)/3. Then q_(G) ≥ 1/2 - e(A,B)/e(G) - ((A)-(B))^2/2(G)^2 The proof of Lemma <ref> is straightforward and so we defer the details to the appendix, see page lem.nearlybisectionagain. Since G satisfies Δ^9 ∈[1,n/6), it follows from Proposition <ref>, taking our vertex weights to be the degrees of the vertices, that there exists a partition {A, B, R} of the vertices of G with A = B and such that e_G(A, B) ≤1/2∑_i=1^n d_i (1/2 - c/√(d_i)), _G(A) - _G(B)≤Δ^9(Δ - δ) ≤Δ^10 and the remainder R satisfies R≤Δ^9 and max_v∈ R d_v ≤ 2d̅. We now prove a lower bound on the modularity score of the partition {A, B, R} and hence a lower bound on the modularity value of G. Recalling that ∑_i d_i = 2e(G) and e(G)=nd̅/2 the bound (<ref>) gives e_G(A, B) ≤e(G)/2 - c/2∑_i=1^n √(d_i) = e(G)( 1/2 - c/n∑_i=1^n √(d_i)/d̅). To bound the volume of R (R) ≤ |R| max_v ∈ R d_v ≤ 2 Δ^9 d̅, and thus for Δ^9 ≤ n/6, we have (R)≤(G)/3 and so can apply Lemma <ref>. Now substituting the bounds in (<ref>) and (<ref>) into Lemma <ref> gives the desired result. §.§ Modularity bounds without the max degree condition In this section we prove our main theorem. The idea here is that we can apply the same method as we did for Theorem <ref> for the bulk of the graph, and then deal with the high-degree vertices separately. By doing so, we can remove the condition on the maximum degree, instead replacing it with a mild condition on the upper tail. The proof still uses the weight-balanced bisection with few edges across the parts to gain its main term, however we will now apply it just to a subgraph G[L], where L is the set of vertices whose degree is at most a constant multiple of the average degree. We then assign the vertices in [n]∖ L randomly to the two parts of our partition, using one method for vertices with degree at most √(n) and one for the rest. Unlike in the bulk, where the weight-balanced bisection actually gains us our main term, in this part we can only hope to keep the additional error terms small, since our random assignments do not really use the structure of the graph. Note the following simple expression for two-part modularity (which follows for example from Lemma <ref> by taking R to be the empty set). For any graph G and partition {A, B} of its vertices, we have q(G, {A, B}) = 1/2 - e(A,B)/m - ((A) - (B))^21/8m^2 We are now ready to prove Theorem <ref>, which we restate here as a proposition with explicit error terms. Let G be an n-vertex graph with average degree d̅≥ 1, L = {v ∈ G d_v < C d̅} for some C > 1, and assume that (L) ≥ (1 + γ) m = (1 + γ)n d̅/2 for some γ >0. 
If ϑ = (C d̅)^10n^-1 < 1/2(1 - 1/C), then (G) ≥0.26/√(C d̅)(γ - 2 ϑ/C d̅) - ϑ^2/2 d̅^2 - 3/8√(n)-4Δ(G)^2/n^2 d̅^2. Moreover, if Δ(G)/n = o(1) and ϑ=o(1), then (G)≥0.26 γ/√(C d̅)(1+o(1)). To prove the bound, we randomly construct a bipartition with expected modularity score at least as claimed, and thus conclude that there exists a bipartition achieving at least that score. As in Theorem <ref>, we use the weight-balancing result, Proposition <ref>, this time applying it to just the low-degree vertices, L, to get a partition into {A, B, R}. For the random partitioning step, we take the remainder U=L ∪ R and randomly divide it into two parts U_A and U_B. (Here, we break into vertices U^+ with degree at least n^1/2 and the remainder U^- and have slightly different procedures for U^+ and U^-.) Let G' = G[L] and S = V(G)∖ L. Given a vertex v ∈ L, let d_v' be the degree of v in G', that is, its number of neighbours in L. We will apply Proposition <ref> to the graph G', using the degrees d_v' as our vertex weights. This will require us to bound the maximum degree in G' in terms of the number of vertices of G', that is, in terms of L. Observe that L = n - S > n - n/C = n(1 - 1/C) by Markov's inequality, and so we get that (max_v ∈ G' d_v')^9 ≤(Cd̅)^10 < (1 - 1/C)n/2 < L/2 where we, for the first inequality, used that the maximum degree of G' is at most Cd̅ by construction, and the second inequality follows from our assumption that ϑ = (C d̅)^10n^-1 < 1/2(1 - 1/C). Thus G' will satisfies the condition of Proposition <ref>, and we get a partition {A,B,R} of V(G'), and thus a partition {A,B,R,H} of the vertices of G. Since we cut off the vertices of the highest degree, we get the following guarantees on this partition: * A = B, * R < (Cd̅)^9, * e(A,B) ≤1/2∑_v ∈ A ∪ B d_v' (1/2 - c/√(d_v')), * (A) - (B)≤ (Cd̅)^9(Cd̅ - δ) ≤ (Cd̅)^10, * max_v ∈ R d_v ≤ 2d̅. Our strategy will be to divide the vertices we do not have degree bounds for – the ones in H and R – randomly into A and B, and use this randomness to control their contribution to the modularity. As before, the positive contribution to the modularity score will come the fact that there are relatively few edges between A and B. Let U = H ∪ R, and let {U_A, U_B} be a partition of U. We will first perform some general computations, and then just after (<ref>) we specify the random procedure to partition U into {U_A, U_B}. By Remark <ref>, we see that q(G, {A ∪ U_A, B ∪ U_B}) = 1/2 - e(A ∪ U_A, B ∪ U_B)/m - ((A ∪ U_A) - (B ∪ U_B))^2/8m^2. Now substituting m = e(A ∪ B) + e(U) + e(A ∪ B, U) and e(A ∪ U_A, B ∪ U_B)=e(A,B) + e(U_A, B) + e(A, U_B) + e(U_A, U_B) we can compute that 1/2 - e(A ∪ U_A, B ∪ U_B)/m = e(A∪ B) - 2e(A,B)/2m + e(U) + e(A∪ B, U)/2m - e(U_A,U_B) + e(U_A, B) + e(A, U_B)/m. Then, we observe that ((A ∪U_A) - (B∪ U_B))^2 = ((A) - (B))^2 + 2((A)-(B))((U_A)- (U_B)) + ((U_A) - (U_B))^2 and so, taking this and our previous computations we have the following expression for the modularity score q(G, {A ∪ U_A, B ∪ U_B}) = e(A ∪ B) - 2e(A,B)/2m - ((A) - (B))^2/8m^2 + e(U) + e(A∪ B, U)/2m - e(U_A,U_B) + e(U_A, B) + e(A, U_B)/m - ((A)-(B))((U_A) - (U_B))/4m^2 - ((U_A) - (U_B))^2/8m^2 and thus we have five different terms which we will consider in turn. Firstly, for (<ref>), we use that e(A,B) ≤1/2∑_v ∈ A ∪ B d_v' (1/2 - c/√(d_v')) = 1/2e(A ∪ B) - c/2∑_v∈ A ∪ B√(d_v') and so (<ref>) is bounded below by c/2m∑_v∈ A ∪ B√(d_v'). For (<ref>) we use our bound on (A) - (B) to see that ((A) - (B))^2/8m^2≤(Cd̅)^20/8m^2. 
Now, upon reaching the terms that involve U_A and U_B, we specify how these sets are chosen. 4 Random procedure for choosing U_A and U_B (see also Figure <ref>). Let U^+ ⊆ H ⊆ U be the set of vertices of degree at least n^1/2 in G, with potentially one vertex less to make U^+ even. Firstly, pick a perfect matching ℳ on U^+, matching the highest-degree vertex to the second-highest-degree, the third highest to the fourth, and so on. Secondly, for each edge xy ∈ℳ, choose uniformly at random whether to put x in U_A and y in U_B, or vice versa. Thirdly, the vertices of U^-= U ∖ U^+ get placed into U_A or U_B independently at random with probability 1/2. We emphasise that U^- might contain one vertex ν with d_ν≥ n^1/2, and the remaining vertices have degree at most n^1/2. Note that, since d_v ≥ n^1/2 for v∈ U^+, we have n^1/2|U^+| ≤(U^+). Moreover, U^+⊆ H and so (U^+)≤ m and thus U^+≤ m/n^1/2. Hence |E(G)∩ℳ| ≤|U^+|/2≤m/2n^1/2. Having defined our choice of U_A and U_B, we can compute the expectation of the remaining terms of the modularity score, i.e. (<ref>)-(<ref>). Starting with the random part of (<ref>), we get e(U_A,B) = 𝔼[ ∑_ ub ∈ E(U,B)u ∈ U_A] = 1/2e(U,B) and likewise e(A, U_B) = 1/2e(A,U). Moreover, since for x, y ∈ U that are not matched by ℳ, their assignment to parts is independent, while for xy ∈ M' the endpoints x and y are deterministically assigned to different parts, we have e(U_A,U_B) = 12| E(U) \ℳ| + |E(U) ∩ℳ| = 12e(U) + 12|E(U) ∩ℳ| ≤1/2e(U) + m/4n^1/2. In total, the expectation of the random part of (<ref>) is bounded below by e(U,B) + e(A,U) + e(U) /2m + 1/4n^1/2 which then cancels nearly exactly with the deterministic first half, leaving us with a lower bound on the expectation of (<ref>) of form - 1/4n^1/2 . For (<ref>), we compute that (U_A) - (U_B) = ∑_v∈ U_A d_v - ∑_v∈ U_B d_v = ∑_v∈ U d_v(v ∈ U_A - v ∈ U_B)=0. and thus (<ref>) term has expectation zero. Finally, for (<ref>), writing U_A^+=U_A ∩ U^+ and U_A^-=U_A ∩ U^-, and defining U_B^+, U_B^- similarly, we first note ((U_A) - (U_B) )^2 = ( (U_A^+) - (U_B^+) + (U_A^-) - (U_B^-))^2 ≤ 2 ( (U_A^+) - (U_B^+) )^2 + 2 ( (U_A^-) - (U_B^-) )^2. For the first term of (<ref>), Lemma <ref> (the load balancing lemma) gives the deterministic bound | (U_A^+) - (U_B^+)| ≤Δ, where Δ is the maximum degree of G, since we may take L=1 in the application of Lemma <ref>. The contribution of U^- is ((U_A^-) - (U_B^-))^2 = 𝔼[ (∑_v ∈ U^- d_v^2 (v ∈ U_A - v ∈ U_B) )^2] = ∑_v ∈ U^- d_v^2 ≤ d_ν^2 + n^1/2∑_v ∈ U^- d_v ≤Δ^2 + n^1/2m; recalling that d_ν comes from a potentially unmatched high-degree vertex ν. Thus by (<ref>) and using m ≥ n, we conclude the expected value of (<ref>) is at least -((U_A) - (U_B) )^2 /8m^2≥ -Δ^2/m^2 - n^1/2/8m≥ -Δ^2/m^2 - 1/8n^1/2. We may take an instance of the random partition, Ũ_A and Ũ_B say, for which the modularity score of q(G, A ∪Ũ_A, B ∪Ũ_B) is bounded below by our bound on the expectation of the modularity score of the random partition. Gathering our calculations - terms (<ref>) and (<ref>) are bounded in (<ref>) and (<ref>), and the expectation of terms (<ref>)-(<ref>) are bounded by line (<ref>), 0 and line (<ref>) respectively. Thus, q(G, {A ∪Ũ_A, B ∪Ũ_B}) ≥c/2m∑_v ∈ A ∪ B√(d_v') - (Cd̅)^20/8m^2 - 3/ 8n^1/2 - Δ^2/m^2. It remains to simplify the lower bounds in (<ref>). We have c/2m∑_v ∈ A ∪ B√(d_v') ≥c/2m∑_v ∈ A ∪ Bd_v'/max_w ∈ A∪ B√(d_w') = c_G'(A ∪ B)/2m(max_v ∈ A∪ B√(d'_v)) ≥c_G'(A ∪ B)/2m√(C d̅). We now consider _G'(A ∪ B). Recall that we defined G' = G[L], and L = A∪ B ∪ R. 
We compute that _G'(A ∪ B) = _G'(G') - _G'(R) = _G(L) - e_G(H,L) - _G'(R) ≥_G(L) - _G(H) - _G(R) and so since _G(L) = (1+γ)m and _G(H)=(1-γ)m by assumption, we get that _G'(A ∪ B) ≥ 2γ m - _G(R). Moreover, _G(R)≤ |R|max_v∈ R d_v ≤ (Cd̅)^9· 2 d̅ = 4m(Cd̅)^9 / n by Proposition <ref>. Substituting this into (<ref>), we get that c/2m∑_v ∈ A ∪ B√(d_v')≥c_G'(A ∪ B)/2m√(C d̅)≥cγ/√(Cd̅) - 2c (C d̅)^9/n√(Cd̅). Hence, (<ref>) implies that q(G, {A ∪Ũ_A, B ∪Ũ_B}) ≥cγ/√(Cd̅) - 2c (C d̅)^9/n√(Cd̅) - (Cd̅)^20/8m^2 - 3/8n^1/2 - Δ^2/m^2, and it remains to express two of the error terms in terms of ϑ = (Cd̅)^10n^-1. Namely, 2c (C d̅)^9/n√(Cd̅) = 2c ϑ/(C d̅)^3/2 and (Cd̅)^20/8m^2 = ϑ^2 n^2/2n^2 d̅^2 = ϑ^2/2 d̅^2. Gathering terms and simplifying, we get the final form of our theorem, stating that (G) ≥ q(G, {A ∪Ũ_A, B ∪Ũ_B}) ≥c /√(C d̅)(γ - 2 ϑ/C d̅ - ) - ϑ^2/2 d̅^2 - 3/8n^1/2-Δ^2/m^2. as desired. Recall that c >0.26. It follows that if Δ/m = o(1) and ϑ=o(1), then (G)≥0.26 γ/√(C d̅)(1-o(1)). The following proposition may be useful to get better constants in some situations, mainly because we do not lose the 1/ √(C) in the main term. Let G be an n-vertex graph with average degree d̅, and let L = {v ∈ [n]: d_v ≥ C d̅} for some constant C ≥ 2. Let (d_v')_v ∈ L be the degree sequence of G[L], and ϑ:=(C d̅)^10n^-1<1. Then (G) ≥c/2 n d̅∑_v ∈ L√(d_v') -ϑ^2/2 d̅^2 - 3/8n^1/2 - Δ^2/4(n d̅)^2. We follow the proof of Theorem <ref> (with the same notation) down to (<ref>). Recalling that L = A ∪ B ∪ R, we have q(G, {A ∪ U_A, B ∪ U_B}) ≥c/n d̅∑_v ∈ A ∪ B√(d_v') - (Cd̅)^20/2(n d̅)^2 - 3/8n^1/2 - Δ^2/4(n d̅)^2. It remains to compare ∑_v ∈ A ∪ B √(d_v') with ∑_v ∈ A ∪ B ∪ R√(d_v'). To this end, note that |R| ≤ (C d̅)^9 ≤n/4≤|A ∪ B ∪ R|/2, and that d_w' ≤ d_v' for all w ∈ R and v ∈ A ∪ B. Therefore, ∑_v ∈ A ∪ B √(d_v')≥∑_v ∈ R√(d_v'), so ∑_v ∈ A ∪ B √(d_v')≥1/2∑_v ∈ A ∪ B ∪ R√(d_v'), which together with (<ref>) yields (<ref>). § LOWER BOUNDS FOR POWER-LAW GRAPHS We will now apply our main result to deduce Theorem <ref>, as well as a more general lower bound in terms of the moments of the degree sequence. Notice that although Theorem <ref> is stated for constant d̅, the bound actually holds for mildly increasing d̅, up to d̅ = n^o(1). Let G be a graph with degree sequence = (d_i)_i ∈ [n], with average degree d̅, satisfying 1/n |{i: d_i ≥ k }| ≤ A d̅^τ -1 k^1-τ for all i, with constants τ >2 and A >0. For b = 0.1 ( (τ-2)/8A)^1/2(τ -2) and sufficiently large n, (G) ≥ b d̅^-1/2. For convenience, we may assume that τ≤ 3; indeed, if a sequence satisfies (<ref>) with some τ', then it also satisfies it with some smaller value τ < τ'. To verify the hypothesis of Proposition of <ref>, let s_j = |{i: d_i ≥ j }| and note that s_j- s_j+1 = {i: d_i = j }. We have ∑_i ∈ [n]: d_i ≥ k d_i = ∑_j ≥ k j^ (s_j - s_j+1) = ∑_j ≥ k j s_j - ∑_j ≥ k+1 (j-1) s_j = k s_k + ∑_j ≥ k+1 s_j ≤ A d̅^τ -1 k^2-τn + ∑_j ≥ k+1 Ad̅^τ -1 j^1-τ n, where we second equality follows by changing the summation variable in the second sum, and the inequality uses the hypothesis on . Since ∑_j ≥ k+1 j^1-τ≤∫_k^∞ x^1-τ dx = 1/τ-2 k^2-τ, we have ∑_i ∈ [n]: d_i ≥ k d_i ≤( 1 + 1/τ -2) A d̅^τ -1 k^2-τ n. Inserting k = (4 A ·τ -1/τ-2)^1/(τ -2)d̅ and dividing by n d̅ / 2, we obtain 2/n d̅∑_i ∈ [n]: d_i ≥ k d_i ≤2A (τ -1)/τ -2·(4 A ·τ -1/τ-2)^-1 = 1/2. Hence ∑_i ∈ [n]: d_i < k d_i ≥ n d̅ - n d̅/4 = n d̅/2(1 + 1/2), and the hypothesis of our proposition is satisfied with γ = 12 and C = (4 A ·τ -1/τ-2)^1/τ -2≤( 8A/τ-2)^1/τ -2. 
Proposition <ref> then implies that (G) ≥ 0.26 γ( 8A/τ-2)^-1/2(τ -2)d̅ ^-1/2 -O(ϑ) - 4 Δ(G)^2/n^2 d̅^2. Now, recall ϑ = (Cd̅)^10/n for the value C earlier and thus ϑ=O(n^-1). For the other error term note (<ref>) implies that Δ(G) ≤ (An)^-11-τd̅. It follows that, for sufficiently large n, (G) ≥ 0.1 (8A/τ-2)^-1/2(τ -2)d̅ ^-1/2. Let us also point out a more general statement, which controls modularity in terms of moments of the degree sequence. The moments are one way to capture an assumption that the degree distribution is still `reasonably smooth'. Note that in the statement below, κ can be an arbitrarily small positive real number, to circumvent the fact that for some graph classes occurring in practice, not even the second moment of the degree sequence is bounded. This statement formally implies Theorem <ref>, but verifying this implication is as difficult as proving Theorem <ref> directly. Let G be a graph with degree sequence = (d_1, …, d_n) whose mean is d̅ = O(1). Suppose for some κ >0 and B>0, ∑_v ∈ [n] d_v^1 + κ≤ B n d̅ ^1+κ There is a constant c' such that for sufficiently large n, (G) ≥ c' d̅ ^-1/2. Let L be the set of vertices of degree at most (4B)^1/κd̅ and denotes its complement by H=L^c. We claim that then (H) ≤d̅ n / 4. For, B d̅ ^1+κ n ≥∑_v d_v^1+ κ≥∑_v ∉ T d_v^1 + κ≥(min_v ∈ T^c d_v )^κ·(H). Noting that d_v^κ≥ 4B d̅ ^κ for v ∈ H and rearranging gives nd̅/4≥ (H), as required. Hence, (L) ≥3nd̅/4, so we may apply Theorem <ref> with C = (4B)^1/κ and γ = 1/2. It follows that (G) = Ω(B^-1/2 κd̅ ^-1/2), where Ω hides an absolute constant. PA_n^(m, δ) (1 + O(m^-1) ) §.§ Preferential attachment graphs and related models Preferential attachment models (PAM) describe graphs which grow in time, that is, vertices are sequentially added to the graph. Given the graph at time t, a vertex with label t+1 is added to the graph and attached to older vertices according to a probability distribution according to which it is more likely to attach to high-degree vertices. Thus the degree sequence of such a graph is not specified a priori, but emerges from the attachment rule. The degree sequence of the classical PAM considered for instance in <cit.> typically follows a power-law with the exponent τ = 3. In this section, we demonstrate how Theorem <ref> can be applied to an entire class of PAMs which realise every power-law exponent τ with τ >2. We will be working with the model presented in <cit.>, and we follow their notation. In a graph G on the vertex set {v_1, …, v_n} let D_i(n) denote the degree of v_i, and let P_k(n) = 1/n{i ∈ [n]: D_i(n) = k } be the proportion of vertices of degree k. At time t the graph has vertex set {v_1, …, v_t} and vertex i has degree D_i(t). The model has parameters m ∈, which governs the average degree, and -m <δ < m. It produces a graph sequence denoted by which, at time n, has n vertices and mn edges. The first vertex v_1 has m loops. At time t, the vertex v_t is added, along with m edges e_1, …, e_m incident to v_t. The other endpoint of the edge e_i is a vertex v_j ∈{v_1, …, v_t } with probability roughly proportional to D_i(t) + δ (that is, an affine function of the current degree of v_i). For a full description of the model see <cit.> from which all the results which we use are taken. We remark that the average degree of this graph is 2m, which does conflict with the use of m (for the number of edges) in the previous section. For this specific model, Ross <cit.> showed that the degree sequence follows a power-law with exponent τ = 3 + δ/m > 2. 
Such results were first obtained by Bollobás and Riordan <cit.>, for the less general model with δ = 0 and τ =3. Thus the results of the previous section in principle imply that such graphs have high modularity, but to prove a rigorous result, we need to deal with loops and multiple edges in the model, as well as with the fact that the results from <cit.> (and also <cit.>) do not a priori give sufficient bounds on the number of vertices of degree, say n^1/5. Recall that P_k(n) is the proportion of vertices of degree k in . Let p_k = p_k(m,δ) be the probability mass function defined in <cit.> and in Appendix <ref>; p_k(n) will be the limiting degree distribution for , and for the present we will only use the estimates p_k = k^-3+δ/m(2 + δ/m)(m+δ)^3 + δ/m(1 + O(m^-1) ) ≤ 2^5 k^-3+δ/mm^3+δ/m, where the second inequality follows from 3 + δ/m≤ 4. Let τ = -3+δ/m. We will need the following facts deduced from <cit.>; the proof is deferred to after the main theorem, see page proof:pam-deg. With high probability, the following holds in with δ∈ (-m, m). * For k ∈ [n], k ≤ n^1/10, and some _1 >0, P_k(n) = p_k(1+O(n^-_1)). * For A ≤ n^1/10log^-1n, ∑_k ≥ Am kP_k(n) ≤ 2m · 32A^-τ+2/(τ-2) * ∑_k ∈ [n] k^2 P_k(t) ≤ n^1-_2 for some _2 >0. * The number of loops in is O(log^2 n), and the number of multiple edges is at most n^1-_3 for some _3>0. Now we can prove the desired bound. As mentioned the case δ=0 was proven in <cit.>. Let G̃ be an n-vertex graph obtained from G∼ after removing loops and multiple edges from G, and let δ∈ (-m, m). There is a constant c' such that whp G̃ has average degree 2m(1-o(1)), and (G̃) ≥ c' m^-1/2. Assume that satisfies the claims in Proposition <ref>, which occurs with high probability. Recall that τ = 3 + δ/m >2. Recall D_i(n) is the degree of vertex i in G. Let d_G̃(v_i) denote the degree of v_i in G̃, and clearly d_G̃(v_i) ≤ d_G(v_i) = D_i(n). Let A be a sufficiently large constant such that 32 A^-τ+2/τ-2 < 1/8, let H be the set of vertices with degree at least Am, and denote its complement by L=H^c. By item <ref>, (H) = ∑_v ∈ H d_G̃(v) ≤ n∑_k ≥ Am k P_k(n) ≤ 2mn ·32A^-τ+2/τ-2≤1/8· 2mn. By item <ref>, e(G̃) = mn(1-o(1)), so (L) ≥7/8· 2mn (1-o(1)) = 7/4 e(G)(1+o(1)) ≥3/2 e(G̃). Hence we may apply Theorem <ref> with C = A and γ = 1/2 to deduce that (G) ≥ c' m^-1/2, as required. For the classical preferential attachment model, we have δ = 0 and τ = 3, so Theorem <ref> can be applied with A = 2^8 to obtain an explicit value for c'. Before proving Proposition <ref>, we need some properties of the sequence p_k; the formal definition of p_k and the proof of the following lemma can be found in the Appendix. Let m be a positive integer, δ∈ (-m, m) and τ = -3 + δ/m. The sequence p_k=p_k(m, δ) satisfies ∑_k =m^∞ p_k = 2m. Moreover, there is a constant b_m, δ such that ∑_k=Cm^∞ k p_k ≤2^5/τ -2C^2 - τm and ∑_k=m^M k^2 p_k ≤ b_m, δmax{ M^3-τ, log M}. We can now prove Proposition <ref>. Theorem 8.3 in <cit.> states that whp, for all k, |P_k(n)-p_k| ≤log n/√(n). It follows from (<ref>) that for k ≤ n^1/10 and τ < 4, we have p_k ≥ n^-4/10. These two facts together imply <ref> holds (for any fixed _1<1/10). For item <ref>, notice that Lemma <ref> implies that ∑_k=m^Am k p_k ≥ 2m (1- 2^4/τ-2A^2-τ). Item <ref> will follow from the `complementary inequality' ∑_k=m^Am kP_k(n) ≥ 2m(1- 2^5/τ-2A^2-τ), since ∑_k ≥ m k P_k(n) =2m deterministically. Now, notice that Lemma <ref> implies that ∑_k=m^Am k p_k ≥ 2m (1- 2^4/τ-2A^2-τ). This estimate and <ref> yield (<ref>). 
For <ref>, we split into two ranges. For k ≤ n^1/11, by item (i), we have ∑_k ≤ n^1/11 k^2 P_k(n) ≤∑_k ≤ n^1/112 k^2 p_k. Thus by (<ref>) we have ∑_k ≤ n^1/11 k^2 P_k(n) ≤ b_m, δmax{ n^1/11(3-τ) , log n}≤ b_m, δ n^1/11. Now, by <ref>, the sum of all vertex degrees in which are higher than n^1/11 is at most C_m,δ' n^1+(2-τ)/11≤ n^1- for some > τ-2/11>0. Hence, by convexity, the sum ∑_k ≥ n^1/11k^2 P_k(n) is maximised when there is a single vertex of degree ℓ =⌊ n^1-⌋, so ∑_k ≥ n^1/11k^2 P_k(n) ≤ n^2-2·1/n≤ n^1-2. Summing the two results gives the required bound. For <ref>, we let be the event that satisfies <ref>-<ref>, and we may condition on  as it occurs with high probability. Recall that D_i(t) denotes the degree of the vertex v_i in PA_t^(m, δ) (i.e., after t vertices are added to the preferential-attachment graph). For the purposes of the present proof, it suffices to use crude upper bounds on the attachment probabilities in ; moreover, we will only use an upper bound D_i(t)≤ D_i(n) for t ≤ n. For the exact probabilities, see <cit.>. The first vertex v_1 has m loops. When adding the vertex v_t+1, m edges are attached to v_t+1, and each of them is a loop with probability at most 2(m-1)/mt, where the numerator 2m corresponds to the worst-case scenario where v_t+1 already has m-1 loops attached to it. Summing over the m edges attached to v_t+1 (for t ≥ 1) and over all t, the expected number of loops is at most m + ∑_t =1 ^n m/t≤ 2m log n. So by Markov's inequality, the number of loops in is at most log^2 n with high probability. To control multiple edges, note that <ref> implies that conditional on , ∑_i ∈ [n]D_i^2(n) = n∑_k =m^n k^2P_k(n) ≤ n^2-. Let Z_t denote the number of multiple edges v_i v_t+1 with i ≤ t. The probability that one of the m edges attached to v_t+1 is incident to a given vertex v_i (with i ≠ t+1) is at most m ·D_i(n)+δ/mt(2+δ)+(1+δ)≤D_i(n)/t. Hence the probability that v_i v_t+1 is a multiple edge is at most D_i^2(n)/t^2. Thus for t ≥ n^1-/4, 𝔼 [Z_t |] ≤1/t^2∑_i ∈ [n]D_i^2(n) ≤ n^-/2. Summing over t, and using the trivial upper bound Z_t ≤ m for t ≤ n^1- / 4, we get that the expected number of multiple edges is at most ∑_t = 1^n 𝔼 [Z_t | ] ≤ n^1- / 4 m+ n^1- /2≤ 2mn^1-/4. Again, using Markov's Inequality, we have that ∑_t Z_t ≤ n^1-/5 with high probability. § UPPER BOUNDS ON MODULARITY In this section, we show that for a large class of sequences , typical graphs with degree sequence approximately actually have modularity O(d̅^-1/2), matching the lower bound from Theorem <ref> up to a constant factor. We consider the Chung-Lu model of random graphs. Let = (w_v)_v ∈ [n] where each w_v>0 and denote w̅=n^-1∑_v w_v and w_ min=min_v w_v. We will also assume that for each v we have w_v^2=o(w̅n). Generate the random graph G(n, ) by choosing each edge uv independently with probability (where u≠ v as we do not allow loops) p_uv=w_u w_v/w̅n. We may see that the expected degree of v in G(n, ) is w_v(1-w_vw̅^-1n^-1)=w_v(1-o(1)), i.e. approximately w_v. This is why the Chung-Lu model is often referred to as the random graph with a given expected degree sequence. In fact, for a large class of degree sequences, the empirical degree sequence of G(n, ) is close to ; for details, see Theorems 6.10 and 6.19 in <cit.>. If the degree sequence of G(n, ) satisfies the assumptions of Theorem <ref>, then we can deduce that its modularity is Ω(w̅^-1/2). We will now prove an upper bound of the same order of magnitude, assuming that w_min≥ c w̅ for some constant c. 
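The Chung-Lu model just described is available in networkx as expected_degree_graph, whose edge probability w_u w_v / Σ_v w_v coincides with p_uv above under the assumption w_v^2 = o(w̄ n). A small sketch (ours), with an arbitrary weight vector satisfying w_min ≥ c w̄:

```python
# Sampling from the Chung-Lu model G(n, w) via networkx's expected_degree_graph;
# the weight vector below is an arbitrary choice of ours with w_min >= c * w_bar.
# Empirical degrees concentrate around the prescribed expected degrees w_v.
import networkx as nx
import numpy as np

n = 5_000
w = np.random.default_rng(0).uniform(10, 60, size=n)     # w_min ~ 10, w_bar ~ 35
G = nx.expected_degree_graph(w.tolist(), selfloops=False)
degs = np.array([G.degree(v) for v in range(n)])
print("mean relative deviation |d_v - w_v| / w_v:",
      round(float(np.mean(np.abs(degs - w) / w)), 3))
```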
Throughout this section, we write whp to mean with high probability, i.e. with probability converging to 1 with n. We recall the normalised Laplacian of a graph G is defined to be ℒ_G = I - D^-1/2AD^-1/2 where A is the adjacency matrix of G and D is the diagonal `degrees matrix' where the u-th entry on the diagonal is d_u. Let λ̅_G be the spectral gap of ℒ_G. A very nice result of Chung, Lu and Vu <cit.> is that whp λ(G(n, w)) > 1 - 4w̅^-1/2 (1 + o(1)) -w_ min^-1ln^2 n. Now we recall that the modularity of a graph is bounded above by its spectral gap see for example Lemma 6.1 of <cit.>: (G) ≤λ̅(G). Thus the result of <cit.> immediately gives the following corollary. Also recall that the modularity value is robust to changes in the edge-set, if we may obtain H from G by deleting at most · e(G) edges then |(G)-(H)|<2, by Lemma 5.1 of <cit.> (we will use this to obtain Corollary <ref>). Suppose is a degree sequence with w_ min=ω(ln^2 n). Then (G(n, )) ≤ 4w̅^-1/2 (1 + o(1)) . For a larger class of , Coja-Oghlan and Lanka <cit.> show lower bounds on the spectral gap not on the entire graph G(n, ) but for an induced subgraph which comprises most of the volume of the graph. There exists constants c_0 and w_0 such that the following holds. If  satisfies w_0 ≤ w_ min≤ w_ max≤ n^0.99 then whp G contains an induced subgraph H with (i) λ̅_H ≥ 1- c_0 w_ min^-1/2 and (ii) e(H)≥ e(G) - nexp(-w_ min/c_0). There exists constants c_0, w_0 such that the following holds. If satisfies w_0 ≤ w_ min≤ w_ max≤ n^0.99, then whp (G(n, w)) ≤ c_0 w_ min^-1/2 . The corollary follows almost immediately from Theorem <ref>. Since (G)=w̅ n = ω(1) we get that whp (G)=w̅n(1+o(1)) ≥2/3w̅n. Thus whp G(n, ) contains a subgraph H with e(H)/e(G)≥ 1 - n/e(G)≥ 1 - 3/w̅=1-o(1). Hence by the spectral upper bound with high probability, (G(n, w)) ≤ c_0' w_min^-1/2, which implies the result. § CONCLUDING REMARKS For a large class of sequences 𝐝, we showed that any graph with degree sequence 𝐝 has modularity Ω (d̅^-1/2), improving on the previously known lower bound of order d̅^-1. Specifically, this bound applies to graphs with a power-law degree sequence, which includes preferential-attachment graphs (under suitable models). However, to our knowledge, the best known upper bound on the modularity of the preferential-attachment graph is 15/16 <cit.>. Preferential-attachment graphs are not sampled with an inherent community structure, so one might expect their modularity to decay with the average degree d̅, which is also suggested in <cit.> where they showed a lower bound of Ω(d̅^-1/2). It would be very interesting to prove such an upper bound, and perhaps even a bound of order O(d̅^-1/2). § PROOF OF LEMMA <REF> To show how our weight-balanced bisection leads to a high modularity partition we used Lemma <ref>. This lemma gives a lower bound on the modularity score of a partition intro three parts: two parts with near equal volume and a remainder part. Here we provide the (short) proof of this lemma, which we repeat below for convenience. Let G be a graph and ={A,B,R} a vertex partition of G with (R)≤(G)/3. Then q_(G) ≥ 1/2 - e(A,B)/e(G) - ((A)-(B))^2/2(G)^2 For vertex set R, we write ∂(R) for the number of edges with exactly one endpoint in R. Thus the edge contribution for on G is q^E_(G) = 1 - 1m(e(A,B) + ∂(R) ) = 12 + 1m(12e(G)-e(A,B) ) - 1m∂(R) . For the degree tax, roughly speaking, (R) is negligibly small and parts A and B are of similar volume, i.e. (A)≈(B)≈(G)/2. 
Thus the degree tax is near what it would be if we had two parts of exactly equal volume (which would be (1/2)^2+(1/2)^2=1/2). Define t, a measure of the near-ness of the volumes of A and B, by (A)=(B)+ t·(G) and r, the scaled size of the remainder, by (R)=r·(G). Hence (A)=(G)-(B)-(R) = (G)-(A)+(t-r)·(G) and thus, (with similar calculations for (B)), (A)/(G)=12(1+t)-12r (B)/(G)=12(1-t)-12 r Now we may calculate the degree tax q_^D(G)=(12(1+t)-12 r)^2+(12(1-t)-12 r)^2+r^2=12+12 t^2-r+32r^2. To put the bounds together, note ∂(R)≤(R) and hence ∂(R)/m = ∂(R)/(2(G)) ≤ r/2. Hence the modularity score is at least q^E_(G)-q^D_(G) ≥ (12 + 1m(12e(G)-e(A,B) ) - 12r) - ( 12+12 t^2-r+32r^2 ) = 1m(12e(G)-e(A,B) ) -12 t^2 + 12r -32r^2 Since (R) ≤(G)/3, i.e. r≤ 1/3, we have r/2-3r^2/2= r(1-3r)/2 ≥ 0 which yields the result. § THE LIMITING DEGREE DISTRIBUTION OF THE PREFERENTIAL ATTACHMENT MODEL In this Section, we prove Lemma <ref>, which summarises some properties of the probability mass function p_k = p_k(m+ δ). The main difficulty is proving that ∑_k ≥ m kp_k = 2m, for which we use an alternative characterisation of p_k as a distribution of a random variable X, which can be found in <cit.>. Let X(p) be a random variable with distribution X(p)=k = Γ(m+δ+k)/k!Γ(r)p^m+δ(1-p)^k, which is usually called the negative binomial distribution with parameters m+δ and p. (For m+δ∈, X describes the time of the r-th success in a sequence of independent experiments with success probability p.) We have X(p) = (m+δ)(1-p)/p. Let U be a uniform random variable in [0, 1]; then X(U^1/(2+δ/m)) has the negative binomial distribution with a random parameter p = U^1/(2+δ/m). The function (p_k)_k ≥ m can be described as p_k = UX(U^1/(2+δ/m)) =k-m , often referred to as a mixed distribution. For convenience, we define p_k = 0 for k<m. We remark that from this description, it is clear that ∑_k p_k = 0. Let 2+δ/m=a >1. We have, using the definition of p_k and linearity of expectation, ∑_k ≥ m k p_k = ∑_ℓ≥ 0 (m + ℓ) p_m+ℓ = m + U∑_ℓ≥ 0ℓ X(U^1/a)=ℓ = m + ∫_0^1 (m+δ) ·1- u^1/a/u^1/ad u= m + (m+δ)·1/a-1 = m + m+δ/1/m (m + δ) = 2m; where the convergence of the integral follows from 1/a < 1. Let τ = 3 + δ/m >2. For (<ref>), we use the approximation (<ref>), which we restate here as p_k = ≤ 2^5 k^-τm^τ -1. We have ∑_k = Cm^∞ kp_k ≤ 2^5 m^τ-1∑_k=Cm^∞ k^1-τ≤ 2^5m^τ -1·(Cm)^2-τ/τ - 2≤2^5 C^2-τ/τ-2m, as required. Similarly, for (<ref>), we can subsume all terms that are constant with respect to n into a constant b' = b_m, δ and obtain ∑_k=m^M k^2 p_k ≤ b'_m, δ∑_k=m^M k^2-τ. Since the sum ∑_k k^2-τ may diverge (and does for τ≤ 3), it is dominated by the top terms, so ∑_k=m^M k^2 p_k ≤ b_m, δmax{M^3-τ, log M}.
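The identity Σ_k k p_k = 2m can also be checked numerically from the mixed negative-binomial description above. The following Monte-Carlo sketch (ours, purely a sanity check) averages the conditional mean (m+δ)(1-p)/p of X(p) over the random parameter p = U^{1/(2+δ/m)}.

```python
# Monte-Carlo sanity check (ours) of sum_k k p_k = 2m: a degree is m + X where
# X ~ NegBinom(m+delta, p) with random p = U^{1/(2+delta/m)}, U ~ Uniform(0,1),
# and E[X | p] = (m+delta)(1-p)/p.
import numpy as np

rng = np.random.default_rng(0)
m, delta = 3, 0.5                               # requires -m < delta < m
a = 2 + delta / m
U = rng.uniform(size=2_000_000)
p = U ** (1 / a)
X_mean = (m + delta) * (1 - p) / p              # conditional mean of the negative binomial
print("estimated sum_k k p_k:", round(m + X_mean.mean(), 3), " expected:", 2 * m)
```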
http://arxiv.org/abs/2307.04937v1
20230710232803
Improving Fairness of Graph Neural Networks: A Graph Counterfactual Perspective
[ "Zhimeng Guo", "Jialiang Li", "Teng Xiao", "Yao Ma", "Suhang Wang" ]
cs.LG
[ "cs.LG" ]
The Pennsylvania State University United States [email protected] New Jersey Institute of Technology United States [email protected] The Pennsylvania State University United States [email protected] New Jersey Institute of Technology United States [email protected] The Pennsylvania State University United States [email protected] Graph neural networks have shown great ability in representation (GNNs) learning on graphs, facilitating various tasks. Despite their great performance in modeling graphs, recent works show that GNNs tend to inherit and amplify the bias from training data, causing concerns of the adoption of GNNs in high-stake scenarios. Hence, many efforts have been taken for fairness-aware GNNs. However, most existing fair GNNs learn fair node representations by adopting statistical fairness notions, which may fail to alleviate bias in the presence of statistical anomalies. Motivated by causal theory, there are several attempts utilizing graph counterfactual fairness to mitigate root causes of unfairness. However, these methods suffer from non-realistic counterfactuals obtained by perturbation or generation. In this paper, we take a causal view on fair graph learning problem. Guided by the casual analysis, we propose a novel framework , which can select counterfactuals from training data to avoid non-realistic counterfactuals and adopt selected counterfactuals to learn fair node representations for node classification task. Extensive experiments on synthetic and real-world datasets show the effectiveness of . <ccs2012> <concept> <concept_id>10010147.10010257</concept_id> <concept_desc>Computing methodologies Machine learning</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010405.10010455</concept_id> <concept_desc>Applied computing Law, social and behavioral sciences</concept_desc> <concept_significance>300</concept_significance> </concept> </ccs2012> [500]Computing methodologies Machine learning [300]Applied computing Law, social and behavioral sciences 20 February 2007 [revised]12 March 2009 [accepted]5 June 2009 Improving Fairness of Graph Neural Networks: A Graph Counterfactual Perspective Suhang Wang October 2023 =============================================================================== § INTRODUCTION Graphs are pervasive in real-world, such as knowledge graphs <cit.>, social networks <cit.> and biological networks <cit.>. Recently, graph neural networks (GNNs) <cit.> have shown great ability in modeling graph-structural data. Generally, GNNs adopt the message passing mechanism, which updates a node's representation by iteratively aggregating its neighbors' representations. The resulting representation preserves both node attributes and local graph structure information, facilitating various downstream tasks such as node classification <cit.> and link prediction <cit.>. Despite their great performance, recent studies <cit.> show that GNNs tend to inherit bias from training data, which may result in biased predictions towards sensitive attributes, such as age, gender and race. In addition, the message passing mechanism of GNNs and graph structure could magnify the bias <cit.>. For example, in social networks, nodes of the same race are more likely to connect to each other. The message passing of GNNs would make the representation of linked nodes similar, resulting in a high correlation of node representation with race, hence the biased prediction. 
The biased prediction has raised concerns from ethical and societal perspectives, which severely limits the adoption of GNNs in high-stake decision-making systems, such as job applicants ranking <cit.> and criminal prediction <cit.>. Hence, many efforts have been taken for fair GNNs <cit.>. However, most existing methods are based on statistical fairness notions, which aim to make statistically fair predictions for different sub-groups or individuals <cit.>. Several works have pointed out such fairness notions fail to detect discrimination in the presence of statistical anomalies <cit.>. Therefore, there has been a recent shift toward counterfactual fairness in graph modeling <cit.>. This approach aims to eradicate the root causes of bias by mapping the causal relationships among variables. The identified causal structure allows for the adjustment of sensitive data to generate counterfactuals, ensuring that the prediction remains unaltered by the sensitive information through the utilization of these counterfactuals. For example, NIFTY <cit.> perturbs sensitive attributes to obtain counterfactuals and maximizes the similarity between original representations and perturbed representations to make representations invariant to sensitive attributes. GEAR <cit.> adopts GraphVAE <cit.> to generate counterfactuals and minimizes the discrepancy between original representations and counterfactual representations to get rid of the influence of sensitive attributes. Despite their superior performance, existing graph counterfactual fairness works need to flip sensitive attributes or generate counterfactuals with GraphVAE, which can easily result in non-realistic counterfactuals Such non-realistic counterfactuals may disrupt the underlying latent semantic structure, thereby potentially undermining the model's performance. This is because simply flipping sensitive attributes cannot model the influence on other features or graph structure causally caused by sensitive attributes <cit.>, and the generative approach lacks supervision of real counterfactuals and could be over-complicated <cit.>. Motivated by the discussion above, in this paper, we investigate whether one can obtain counterfactuals within the training data. For example, if a female applicant was rejected by a college, we aim to find another male applicant who has a similar background as the counterfactual applicant. Thus, we can get realistic counterfactuals and avoid the ill-supervised generation process. To achieve our goal, we are faced with several challenges: (i) Graph data is quite complex, thus it is infeasible to directly find counterfactuals in the original data space. Besides, some guidance or rules are needed to find the counterfactuals. (ii) To achieve graph counterfactual fairness, learned representation should be invariant to sensitive attributes and information causally influenced by sensitive attributes. It is critical to design proper supervision to help models get rid of sensitive information. To tackle the aforementioned challenges, we propose a casual view of the graph, label and sensitive attribute. The causal interpretation guides us to find counterfactuals and learn disentangled representations, where the disentangled content representations are informative to the labels and invariant to the sensitive attributes. 
Guided by the causal analysis, we propose a novel framework, Counterfactual Augmented Fair GNN (), to simultaneously learn fair node representations for graph counterfactual fairness and keep the performance on node classification tasks. Specifically, based on the causal interpretation, we derive several constraints to enforce the learned representations being invariant across different sensitive attributes. To obtain proper counterfactuals to guide representation learning, we utilize labels and sensitive attributes as guidance to filter out potential counterfactuals in representation space. The main contributions of our work can be summarized as: * We provide a causal formulation of the fair graph learning process and fair node representation learning task. * We propose a novel framework to learn node representations for graph counterfactual fairness. Specifically, we find counterfactuals in representation space and design novel constraints to learn the content representations. * We conduct extensive experiments on real-world datasets and synthetic dataset to show the effectiveness of our model on the fairness-prediction trade-off. § RELATED WORKS In this section, we review related works, including graph neural networks and fairness-aware GNNs. §.§ Graph Neural Networks Graph neural networks (GNNs) have dominated various tasks on graph-structured data, such as node classification <cit.>, graph classification <cit.> and link prediction <cit.>. Existing GNNs can be categorized into spatial-based GNNs and spectral-based GNNs. Spatial-based GNNs leverage the graph structure directly, focusing on the relationships between nodes and their immediate neighbors to inform feature learning. On the other hand, spectral-based GNNs operate in the spectral domain defined by the graph Laplacian and its eigenvectors, making them better suited to capture global properties of the graph. The superior performance of GNNs has greatly extended their application scenarios <cit.>. For example, banks may leverage GNNs to process transaction networks to detect the abnormal behavior of users <cit.>. The applications in critical decision-making systems place higher requirements for GNNs, such as being fair and interpretable <cit.>. Despite their extensive utility and efficacy, recent studies <cit.> show that GNNs can harbor implicit biases on different groups, which can lead to skewed or unfair outcomes. This bias issue is particularly critical when GNNs are deployed in high-stake scenarios, making it necessary to ensure fairness in the modeling process <cit.>. Thus, mitigating bias and promoting fairness in GNNs are active and necessary research areas <cit.>. The source of bias in Graph Neural Networks (GNNs) primarily originates from two areas. First, it comes from the inherent bias in the input data, which may contain unequal representation or prejudiced information about nodes or connections in the graph. Second, the bias can stem from the algorithmic design of the GNN itself, which may unintentionally emphasize certain features or connections over others during the learning process. Therefore, there is a trend for the research community to design fairer GNN models to deal with graph-based tasks <cit.>. §.§ Fairness in GNNs Fairness is a widely-existed issue of machine learning systems <cit.>. Researchers evaluate the fairness of models with many kinds of fairness notions, including group fairness <cit.>, individual fairness <cit.> and counterfactual fairness <cit.>. 
The metrics can also be used to measure the fairness performance of GNNs <cit.>. The commonly used fairness notions in GNNs are statistical parity <cit.> and equal opportunity <cit.>. FairGNN <cit.> utilizes adversarial training to establish fairness in graph-based models, refining its representation through an adversary tasked with predicting sensitive attributes. EDITS <cit.>, on the other hand, is a pre-processing technique that focuses on ensuring fairness in graph learning. It aims to eliminate sensitive information from the graph data by correcting any inherent biases present within the input network. However, these methods and their metrics are developed based on correlation <cit.>, which has been found to be unable to deal with statistical anomalies, such as Simpson's paradox <cit.>. Based on the causal theory, counterfactual fairness can model the causal relationships and gets rid of the correlation-induced abnormal behavior <cit.>. There is an increasing interest to apply counterfactual fairness on graphs to design fairer GNNs <cit.>. NIFTY <cit.> perturbs sensitive attributes for each node to obtain counterfactuals and omits the causal relationships among variables. GEAR <cit.> uses GraphVAE <cit.> to generate the graph structure and node features causally caused by the sensitive attributes. However, the encoder-decoder-encoder scheme is over-complex and may suffer from information loss. Our paper is inherently different from existing work: (i) Unlike existing works that might generate unrealistic counterfactuals, our work avoids the generation process and selects counterfactuals with sensitive attributes and labels as guidance; and (ii) We propose a causal view to understand the source of bias. Based on the causal interpretation, we also design several constraints to help our model learn the fair node representations. § PRELIMINARIES In this section, we start by introducing the necessary notation and defining the problem at hand. Following this, we employ the Structural Causal Model to frame the issue, which will then motivate our solution - the disentangled fair representation learning method. §.§ Notations and Problem Definition Throughout the paper, we use italicized uppercase letters to represent random variables (e.g., S, E) and use italicized lowercase letters to denote the specific value of scalars (e.g., s, y_i). Non-italicized bold lowercase and uppercase letters are used to denote specific values of vectors (e.g., 𝐱_i) and matrices (e.g., 𝐗), respectively. Let 𝒢=(𝒱, ℰ, 𝐗) denote an attributed graph, where 𝒱={v_1, ..., v_N} is the set of N nodes, ℰ⊆𝒱×𝒱 is the set of edges, 𝐗∈ℝ^N × D is the node attribute matrix. The i-th row of 𝐗, i.e., 𝐱_i is the feature vector of node v_i. 𝐀∈{0,1}^N × N is the adjacency matrix of the graph 𝒢, where 𝐀_ij=1 if nodes v_i and v_j are connected; otherwise 𝐀_ij=0. We use 𝐬∈{0,1}^N × 1 to denote the sensitive attributes, where s_i is the sensitive attribute of v_i. Following <cit.>, we only consider binary sensitive attributes and leave the extension of multi-category sensitive attributes as future work. We use 𝐲∈{1,...,c}^N× 1 to denote the ground-truth node labels, where y_i is the label of v_i. In this paper, we assume that both target labels and sensitive attributes are binary variables for convenience. For the semi-supervised node classification task, only part of nodes 𝒱_L ∈𝒱 are labeled for training and the remaining nodes 𝒱_U=𝒱\𝒱_L are unlabeled. 
The goal is to train a classifier f to predict the labels of unlabeled nodes, which has satisfied node classification performance and fairness performance simultaneously. Given 𝐗, 𝐀 and 𝐘_L, the goal of semi-supervised node classification is to learn a mapping function f to predict the labels of unlabeled nodes, i.e., f: (𝐀, 𝐗) →𝒴_U, where 𝒴_U the set of predicted labels of unlabeled nodes 𝒱_U. §.§ The Desiderata for Fair Graph Learning GNNs have shown remarkable capabilities in the realm of semi-supervised node classification. However, they are not immune to bias issues, primarily stemming from imbalanced or prejudiced input data, and potentially from the structural design of the GNNs themselves, which may inadvertently prioritize certain features or connections. Therefore, substantial efforts have been directed towards developing fairness-aware methodologies within GNNs. The majority of these methods strive to ensure correlation-based fairness notions, such as demographic parity or equality of opportunity. However, these correlation-based fairness notions can be inherently flawed, particularly in the presence of statistical anomalies, which calls for more nuanced and robust approaches to achieve fairness in GNNs. Recent advance <cit.> shows that causal-based fairness notions can help resolve this issue. Thus, to help design a fair GNN classifier, we take a deep causal look under the observed graph. Without loss of generality, in this work, we focus on the node classification task and construct a Structural Causal Model <cit.> in Figure <ref>. It presents the causal relationships among five variables: sensitive attribute S, ground-truth label Y, environment feature E, content feature C and ego-graph G for each node. Each link denotes a deterministic causal relationship between two variables. We list the following explanations for the SCM: * S → E. The variable E denotes latent environment features that are determined by the sensitive attribute S. For example, people of different genders will have different heights or other physical characteristics, where S is the sensitive attribute of genders and E is physical characteristics that are causally determined by the sensitive attribute. This relationship will lead to bias in latent feature space, which we will explain shortly. * C → Y. The variable C denotes the content feature that determines ground-truth label Y. Taking the credit scoring as an example, ideally, we assign credit scores using personal information not related to the sensitive attribute, i.e., we use content feature C instead of E to assign credit score Y. * E → G ← C. The ego-graph G is determined by the content feature C and the environment feature E, which are two disjoint parts. E and C are latent features and G is the observed ego-graph. Considering a one-hop ego-graph, it contains the social connections of center node and the observed feature of center node. The causal relationship indicates environment feature E and content feature C can determine one's social connections and personal features (node attributes). The SCM paves us a way to understand the source of bias and how to design a fair GNN classifier. Next, we give details about source of bias and disentangled learning. Our objective is to approximate the content feature C with a content representation denoted as Ĉ, and similarly, approximate the environment feature E with an environment representation denoted as Ê. 
To streamline our discussion, we will slightly abuse notation by also employing the symbols C and E to signify the corresponding content and environment representations throughout the remainder of the paper. §.§.§ Source of Bias From the causal graph, we can observe that the sensitive variable S and the label variable Y are independent of each other, i.e., the only path from S to Y, S → E → G ← C ← Y, is blocked by the collider G. However, it is worth noting that S and Y are dependent conditioned on G, i.e., P(Y,S|G) ≠ P(Y|G) P(S|G). The conditional dependency of Y and S given G is one major reason that leads to biased prediction. If we directly learn a GNN model that aims to predict Y based on G, then, as Y and S are dependent given G, the learned label Y will be correlated with S, resulting in predictions that are biased with respect to the sensitive attribute S. Alternatively, we can understand the bias by treating existing GNNs as composed of a feature extractor g and a classifier c. The feature extractor g takes the subgraph centered at a node as input and learns the node representation as 𝐳 = g(G). Then the classifier c uses the representation 𝐳 to predict the label as ŷ = c(𝐳). As G is dependent on E and C, the learned representation 𝐳 is likely to contain mixed information of both E and C. Hence, the predicted label ŷ is also likely to be correlated with S. §.§.§ Disentangled Fair Representation Learning The above analysis shows that, in order to have fair prediction, we need to learn disentangled representations E and C to block the path from S to Y conditioned on G, and only use the content information C to predict Y, i.e., P(Y|C). As C determines Y, it contains all the label information needed to predict Y. Meanwhile, observing E and C can block the conditional path from S to Y, i.e., P(Y,S|E,C,G)=P(Y|C,E,G)P(S|C,E,G). Note that observing C blocks the path from E to Y and the path from G to Y. Hence, we have P(Y|C,E,G) = P(Y|C). Meanwhile, observing E blocks the path from S to G and the path from S to C; thus, we have P(S|C,E,G)=P(S|E). This gives us P(Y,S|E,C,G)=P(Y|C)P(S|E). The above equation shows that observing E and C would make Y and S independent, so that P(Y|C) is unbiased. Hence, if we can learn disentangled latent representations E and C, we would be able to use C for fair classification. However, the main challenge is that we do not have ground-truth E and C to help us train a model that learns such disentangled representations. With a slight abuse of notation, we also use C to denote the learned content representation and use E to denote the learned environment representation. Fortunately, we can use the SCM to derive several properties of the optimal representation, which will be used to help learn the latent representations of C and E: * Invariance: C ⟂ E. This property can be understood from two perspectives. The content representations should be independent of the sensitive attributes and of the environment representation induced by the sensitive attribute. Meanwhile, the environment representations should be independent of the labels and of the content representation, which is informative to the labels. * Sufficiency: (C, E) → G. The combined representation can be used to reconstruct the observed graph. * Informativeness: C → Y. The content representations should have the capacity to give accurate predictions of labels Y. 
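The d-separation claims above can be checked mechanically on this SCM. The following is a minimal sketch using networkx (the node names and the call to nx.d_separated are our own illustrative choices, not part of the authors' implementation; in very recent networkx releases the function has been renamed is_d_separator).

import networkx as nx

# SCM of the paper: S -> E, C -> Y, E -> G, C -> G
scm = nx.DiGraph([("S", "E"), ("C", "Y"), ("E", "G"), ("C", "G")])

# Marginally, S and Y are d-separated: the only path is blocked by the collider G.
print(nx.d_separated(scm, {"S"}, {"Y"}, set()))            # True

# Conditioning on the observed (ego-)graph G opens the collider,
# so S and Y become dependent -- the source of biased predictions.
print(nx.d_separated(scm, {"S"}, {"Y"}, {"G"}))            # False

# Additionally observing the latent C and E blocks the path again,
# which motivates learning disentangled (C, E) representations.
print(nx.d_separated(scm, {"S"}, {"Y"}, {"G", "C", "E"}))  # True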
§ METHODOLOGY The causal view suggests us to learn disentangled representation 𝐜 and 𝐞 for node v, with 𝐜 capturing the content information that is useful for label prediction and irrelevant to sensitive attributes, and 𝐞 capturing the environment information depends on sensitive attribute only. With the disentanglement, 𝐜 can be used to give fair predictions. However, how to effectively disentangle 𝐜 and 𝐞 remains a question given that we do not have ground-truth of disentangled representation. Intuitively, for a node v with sensitive attribute s, its content representation 𝐜 should remain the same when the sensitive attribute is flipped to 1-s while its environment representation 𝐞 should change correspondingly. Hence, if we know the counterfactual of node v, we will be able to utilize the counterfactual to help learn disentangled representation for fair classification; while the counterfactual is not observed. To address the challenges, we propose a novel framework as shown in Figure <ref> (a), which is composed of: (i) a GNN encoder that takes ego-graph 𝒢 of node v to learn disentangled representation 𝐜 and 𝐞; (ii) the counterfactual augmentation module, which aims to discover counterfactual for each factual observation and utilize the counterfactual to help learn disentangled representation; (iii) a fair classifier which takes 𝐜 as input for fair classification. Next, we give the details of each component. §.§ Disentangled Representation Learning For each node v_i, the content representation 𝐜_i should capture the important node attribute and neighborhood information for predicting the label while the environment representation 𝐞_i should capture all important information relevant to sensitive attribute. As GNNs have shown great ability in modeling graph structured data, we adopt GNNs to learn 𝐜_i and 𝐞_i. Instead of adopting two GNNs to learn 𝐜_i and 𝐞_i separately, to reduce the number of parameters, we adopt one GNN to learn 𝐜_i and 𝐞_i. We empirically found that using two GNNs and one GNN have similar performance due to constraints we designed to disentangle 𝐜_i and 𝐞_i, which will be introduced later. Specifically, the GNN f_θ parameterized by θ takes 𝒢 as input and learns representation as: [𝐂, 𝐄] = 𝐇 = f_θ(𝐀, 𝐗) where 𝐇∈ℝ^N × d is the learned representation matrix with the i-th row, i.e., 𝐡_i, as the representation of node v_i. We treat the first d_c columns as the content representation matrix 𝐂 and use the next d_e columns as the environment representation matrix 𝐄. Note that d = d_c+d_e. In our implementation, we set d_c = d_e. 𝐂∈ℝ^N × d_c is the content representation matrix with the i-th row, i.e., 𝐜_i, as the content representation of node v_i. Similarly, 𝐄∈ℝ^N × d_e is the environment representation matrix with the i-the row, i.e., 𝐞_i as the environment representation of node v_i. f_θ is flexible to be various GNNs such as GCN <cit.> and GraphSAGE <cit.>. To make sure 𝐜_i captures the content information for fair label prediction, and 𝐞_i and 𝐜_i are disentangled, based on the causal analysis in Section <ref>, we add following constraints: Informativeness Constraint. First, the content representation 𝐜_i should be informative to the downstream tasks, i.e., C → Y. Hence, for node v_i, we should be able to get accurate label prediction from 𝐜_i. Hence, we introduce a classifier f_ϕ with model parameter ϕ. 
It takes 𝐜_i as input and predicts the class distribution of v_i as 𝐲̂_i=f_ϕ(𝐜_i). The loss function for training the classifier is given as: ℒ_pred = 1/|𝒱_L|∑_v_i ∈𝒱_Lℓ(𝐲̂_i, 𝐲_i), where 𝐲_i is the one-hot encoding of the ground-truth label of v_i and ℓ(𝐲̂_i, 𝐲_i) denotes the cross entropy between 𝐲̂_i and 𝐲_i. Sufficiency Constraint. As shown in our causal view, the representation (𝐜_i and 𝐞_i) should be sufficient to reconstruct the observed factual graph 𝒢_i. In disentangled representation learning research, reconstruction supervision is usually adopted to guide the learning process <cit.>. However, existing graph counterfactual fairness approaches <cit.> fail to provide supervision to preserve graph information in the representations. Thus, they put their models at risk of getting stuck in trivial solutions that merely capture spurious information in the representations, which contradicts the SCM and is not sufficient to reconstruct the observed graph 𝒢_i. In our model, we formalize the sufficiency constraint as a reconstruction of the graph structure. Specifically, for a pair of nodes (v_i,v_j), we predict the link existence probability as p_ij = σ(𝐡_i 𝐡_j^T), where 𝐡_i = [𝐜_i, 𝐞_i] is the node representation of node v_i. The sufficiency constraint is ℒ_suf = 1/(|ℰ|+|ℰ^-|)∑_(v_i,v_j) ∈ℰ∪ℰ^- -e_ij log p_ij - (1 - e_ij) log (1 - p_ij), where ℰ^- is the set of sampled negative edges, and e_ij=1 if nodes v_i and v_j are connected; otherwise e_ij=0. Orthogonal Constraint. The above model can help to learn 𝐜_i that captures graph information for label prediction; however, it does not guarantee that 𝐜_i contains no sensitive attribute information. To make sure that 𝐜_i and 𝐞_i are disentangled, i.e., 𝐜_i does not contain any environment information relevant to the sensitive attribute, we further impose the orthogonal constraint 𝐜_i^T 𝐞_i = 0. §.§ Counterfactual Augmented Learning As we do not have ground truth for 𝐜_i and 𝐞_i, we use the above constraints to implicitly supervise their learning. To fully disentangle 𝐜_i and 𝐞_i, we further propose to learn 𝐞_i and 𝐜_i that follow counterfactual constraints. As shown in Figure <ref> (b), generally, for a node v_i with observed factual sensitive attribute s_i and label y_i, its content representation 𝐜_i should remain similar when the sensitive attribute is flipped to 1-s_i, but its environment representation 𝐞_i should change correspondingly, which forms the counterfactual subgraph 𝒢_i^e. Similarly, when flipping the label y_i but keeping the sensitive attribute s_i unchanged, v_i's environment representation 𝐞_i should remain the same, while its content representation should change accordingly, leading to the counterfactual subgraph 𝒢_i^c. Thus, if we knew 𝒢_i^e and 𝒢_i^c, we would be able to use these counterfactual graphs together with the factual graph 𝒢_i to guide the learning of 𝐜_i and 𝐞_i. However, in the real world, we can only observe factual graphs. To solve this challenge, we propose to find potential candidate counterfactuals among the observed factual graphs. The sensitive attribute and label are used to find counterfactuals in our model. Considering the fair credit scoring problem, when someone is assigned a low score, a straightforward idea is to look at the outcomes of people who have a similar background but a different gender. For example, Sarah, a female, got a low credit score. Then she may ask: what if I were a male, what would my credit score be? 
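Before moving on to the counterfactual search, the three constraints introduced above can be made concrete with a short PyTorch-style sketch. The function below is only an illustration under our own assumptions (variable names, the negative-edge sampling interface and the single split representation are ours, not the authors' released code); it computes ℒ_pred, ℒ_suf and the orthogonality penalty from the split representation 𝐇 = [𝐂, 𝐄].

import torch
import torch.nn.functional as F

def constraint_losses(H, d_c, logits, labels, labeled_mask, pos_edges, neg_edges):
    # H: (N, d) node representations; the first d_c columns are C, the rest are E.
    # logits: (N, num_classes) output of the classifier f_phi applied to C.
    # pos_edges / neg_edges: (2, E) index tensors of observed and sampled negative edges.
    C, E = H[:, :d_c], H[:, d_c:]

    # Informativeness (L_pred): cross-entropy on the labeled nodes.
    loss_pred = F.cross_entropy(logits[labeled_mask], labels[labeled_mask])

    # Sufficiency (L_suf): reconstruct the graph structure from h_i = [c_i, e_i].
    edges = torch.cat([pos_edges, neg_edges], dim=1)
    targets = torch.cat([torch.ones(pos_edges.size(1)), torch.zeros(neg_edges.size(1))])
    scores = (H[edges[0]] * H[edges[1]]).sum(dim=-1)   # inner product h_i . h_j
    loss_suf = F.binary_cross_entropy_with_logits(scores, targets)

    # Orthogonality: push c_i^T e_i towards zero via the absolute cosine similarity.
    loss_orth = F.cosine_similarity(C, E, dim=-1).abs().mean()

    return loss_pred, loss_suf, loss_orth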
This thinking inspires us to directly find counterfactuals from the observed node samples instead of performing perturbation or generating <cit.>. The advantages of selecting counterfactuals from the observed node samples are twofold: (1) It avoids making assumptions about the graph generation process with sensitive attributes. (2) It does not need additional supervision signal. Compared with GEAR <cit.>, we do not need additional supervision to guide counterfactual selection. Another problem comes: selecting counterfactuals from the original data space is also challenging due to the complexity of graph distance calculation. To get counterfactual 𝒢^e_i, we need to find some nodes which have different sensitive attribute and the same label. Similarly, we find some nodes with the same sensitive attribute and different labels as counterfactual 𝒢^c_i. The task can be formalized as: 𝒢^c_i = _𝒢_j ∈𝔾{m(𝒢_i, 𝒢_j) | y_i ≠ y_j, s_i = s_j } 𝒢^e_i = _𝒢_j ∈𝔾{m(𝒢_i, 𝒢_j) | y_i = y_j, s_i ≠ s_j } where 𝔾={𝒢_i | v_i ∈𝒱)} and m(·, ·) is a metric of measuring the distance between a pair of subgraphs. Nevertheless, the problem of computing the distance of pairs of graphs is inefficient and infeasible due to the complex graph structure and large search space <cit.>. As we already have node representations 𝐡_i = [𝐜_i,𝐞_i] that capture the graph structure and node attribute information, we propose to measure the distance in the latent space, which can greatly reduce the computation burden. Then the counterfactual graph searching problem in Eq. (<ref>) and Eq. (<ref>) is converted to the problem below: 𝐡^c_i = _𝐡_j ∈ℍ{𝐡_i - 𝐡_j_2^2 | y_i ≠ y_j, s_i = s_j } 𝐡^e_i = _𝐡_j ∈ℍ{𝐡_i - 𝐡_j_2^2 | y_i = y_j, s_i ≠ s_j } where ℍ = {𝐡_i | v_i ∈𝒱} and we use L2 distance to find counterfactuals. A problem is that we only have limited labels in the training set. So we first pre-train the backbone model. With pre-trained model, we can obtain the prediction for unlabeled nodes as pseudo-labels. The pseudo-labels work as the guidance of the counterfactual searching problem. Note that for each factual input we can also get multiple counterfactuals by selecting a set of counterfactuals in Eq. (<ref>) and Eq. (<ref>) instead of one. Thus, the counterfactual 𝒢^c_i can be naturally extended to a set of K counterfactuals {𝒢^c_k_i|k=1,...,K} and 𝒢^e_i can be extended to {𝒢^e_k_i|k=1,...,K}. We fix K to 10 in our implementation. We can utilize the counterfactuals to supervise the disentanglement of 𝐜_i and 𝐞_i. Specifically, as shown in Figure <ref> (b), counterfactual 𝒢^e_k_i shares the same content information with factual graph 𝒢_i and has different environment information. Without supervision, the factual content representation 𝐜_i and the counterfactual content representation 𝐜^e_k_i may contain both the content information and environment information. When we minimize the discrepancy of the learned representations with dis(𝐜_i,𝐜^e_k_i), f_θ will tend to merely keep the content information and squeeze the sensitive information out of learned representations. In a similar manner, we can use dis(𝐞_i, 𝐞^c_k_i) to make the environment representation 𝐞_i be invariant to the content information stored in 𝐜_i. Also, we put the orthogonal constraint here to encourage 𝐜_i and 𝐞_i to store different information in representation space. 
The invariance constraint is given as: ℒ_inv = 1/(|𝒱| · K) ∑_v_i ∈𝒱∑_k = 1^K [ dis(𝐜_i, 𝐜^e_k_i) + dis(𝐞_i, 𝐞^c_k_i) + γ K · |cos(𝐜_i, 𝐞_i)| ], where dis(·, ·) is a distance metric, such as the cosine distance or the L2 distance in our implementation. |cos(·, ·)| is the absolute value of the cosine similarity, and we optimize this term to approximate 𝐜_i^T 𝐞_i=0. γ is the hyper-parameter controlling the orthogonal constraint. §.§ Final Objective Function Putting the disentangled representation learning module and the counterfactual selection module together, the final objective function of the proposed framework is min_θ,ϕℒ = ℒ_pred + αℒ_inv + βℒ_suf , where θ and ϕ are the parameters of the GNN encoder and the prediction head, respectively. α and β are hyper-parameters controlling the invariance constraint and the sufficiency constraint. §.§ Training Algorithm The whole training process is summarized in Algorithm <ref>. Our method relies on the counterfactuals in the representation space to guide the disentanglement. However, the randomly initialized representations in the first several epochs may degrade the performance of our model. Therefore, we first pre-train a plain node representation learning model 𝐘=g_Θ,Φ(𝐀, 𝐗) only with ℒ_pred. Then we use the optimized parameters Θ^*, Φ^* = arg min_Θ, Φℒ_pred to initialize the parameters θ and ϕ of our model and use the aforementioned framework to obtain the desired disentangled representations. We do not necessarily update the counterfactuals in every epoch; we update the counterfactuals once every t epochs, with t=10 in our implementation. As shown in Algorithm <ref>, we first pre-train g_Θ, Φ and use the optimized parameters to initialize f_θ and ϕ (lines 1 to 2). Then we iteratively optimize f_θ and ϕ (lines 3 to 10). In each iteration, we first perform forward propagation to get the node representations (line 4). Then, once every t epochs, we update the selected counterfactuals (lines 5 to 7). Afterwards, we compute the overall objective and perform backpropagation to optimize the parameters θ and ϕ (lines 8 to 9). After training, we obtain the desired fair model f_θ and f_ϕ (line 11). § EXPERIMENTS In this section, we conduct experiments to evaluate the effectiveness of the proposed method and compare it with state-of-the-art fair GNNs. Specifically, we aim to answer the following questions: * (RQ 1) How effective is the proposed framework for the fair node classification task on both synthetic and real-world datasets? * (RQ 2) Can the proposed framework find appropriate counterfactuals? * (RQ 3) How do the proposed modules work? How does each regularization term affect the model performance? §.§ Experiment Settings §.§.§ Real-World Datasets We conduct experiments on three widely used real-world datasets, namely German Credit <cit.>, Credit Defaulter <cit.> and Bail <cit.>. The statistics of the datasets can be found in Table <ref>. These datasets contain sensitive attributes so that they can be used to evaluate fairness performance. The details of the datasets are as follows: * German Credit <cit.>: the nodes in the dataset are clients, and two nodes are connected if their credit accounts are highly similar. The task is to classify the credit risk level as high or low with the sensitive attribute “gender”. * Credit Defaulter <cit.>: the nodes in the dataset represent credit card users and the edges are formed based on the similarity of their purchase and payment information. 
The task is to classify the default payment method with sensitive attribute “age”. * Bail <cit.>: these datasets contain defendants released on bail during 1990-2009 as nodes. The edges between two nodes are connected based on the similarity of past criminal records and demographics. The task is to classify whether defendants are on bail or not with the sensitive attribute "race". §.§.§ Synthetic Dataset Real-world datasets do not offer ground-truth counterfactuals, prompting us to construct a synthetic dataset based on the Structural Causal Model (SCM) as depicted in Figure <ref>. The primary advantage of a synthetic dataset is that it provides us with ground-truth counterfactuals for each node, which enables us to assess the quality of the obtained counterfactuals. In our approach, we consider settings with binary sensitive attributes and binary labels. A graph with 2000 nodes is sampled in our implementation. To generate the desired counterfactuals, we maintain the same sampled value of noise variables and use consistent causal relationships for each node. The sensitive attributes and labels are sampled from two different Bernoulli distributions, with s_i ∼ℬ(p) and y_i ∼ℬ(q), respectively. This results in generating vectors 𝐬_i = [(s_i)_× N] and 𝐲_i = [(y_i)_× N]. Next, environment and content features, 𝐞_i and 𝐜_i, are sampled from normal distributions 𝐞_i ∼𝒩(𝐬_i, 𝐈) and 𝐜_i ∼𝒩(𝐲_i, 𝐈), respectively. These features are combined to form the overall latent feature 𝐳_i = [𝐜_i , 𝐞_i]. The observed feature for each node v_i, denoted as 𝐱_i, is computed as 𝐱_i = 𝐖𝐳_i + 𝐛_i, where 𝐖_ij∼𝒩(1, 1), and 𝐖∈ℝ^d_2 × 2d_1, with 𝐛_i∼𝒩(0,𝐈) ∈ℝ^d_2. The adjacency matrix 𝐀 is defined such that 𝐀_ij = 1 if σ(cos(𝐳_i, 𝐳_j) + ϵ_ij) ⩾α and i ≠ j, with ϵ_ij∼𝒩(0,1), and 𝐀_ij = 0 otherwise. Here, σ(·) denotes the Sigmoid function, and the threshold α controls the edge number. We have the freedom to set sensitive attribute probability p, label probability q, latent feature dimension 2d_1, observed feature dimension d_2, node number N, and threshold α to control the biased graph generation process. Note that in the SCM we have C → Y instead of Y → C, thus a better way is to first generate content features and then assign labels to the features. Intuitively, we argue that when using an optimal classifier to deal with content features with different means will assign the same label in our generation process. Therefore, to simplify the generation process, we use C → Y in our dataset design. The synthetic dataset comes with notable advantages. Firstly, it gives us access to exact counterfactuals. After generating the initial graph, we keep all noise variables and unrelated variables unchanged, then adjust the sensitive attribute s_i or label y_i to calculate the precise counterfactual through the same graph generation procedure. Secondly, the synthetic dataset enables adjustable bias levels, providing us control over the extent of bias in our models. This adaptability allows us to match diverse real-world situations and robustly test our model's capability to manage various bias levels. As a result, we can undertake a comprehensive and detailed evaluation of our model's fairness and prediction quality. 
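The generation process just described can be summarized in a short numpy sketch. The concrete values of p, q, the latent and observed dimensions, the threshold α and the symmetrization of the adjacency matrix below are our own illustrative assumptions and are not necessarily those used in the experiments; for exact counterfactuals one would additionally expose the noise draws so that they can be reused with s (or y) flipped, as described above.

import numpy as np

def generate_synthetic_graph(N=2000, d1=16, d2=32, p=0.5, q=0.5, alpha=0.9, seed=0):
    rng = np.random.default_rng(seed)
    s = rng.binomial(1, p, size=N)                            # sensitive attributes
    y = rng.binomial(1, q, size=N)                            # ground-truth labels
    E = rng.normal(loc=s[:, None], scale=1.0, size=(N, d1))   # environment features (S -> E)
    C = rng.normal(loc=y[:, None], scale=1.0, size=(N, d1))   # content features
    Z = np.concatenate([C, E], axis=1)                        # latent features z_i = [c_i, e_i]
    W = rng.normal(loc=1.0, scale=1.0, size=(d2, 2 * d1))
    b = rng.normal(size=(N, d2))
    X = Z @ W.T + b                                           # observed node features
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    sim = 1.0 / (1.0 + np.exp(-(Zn @ Zn.T + rng.normal(size=(N, N)))))  # sigma(cos + eps)
    A = ((sim >= alpha) & ~np.eye(N, dtype=bool)).astype(int)
    A = np.maximum(A, A.T)                                    # assumption: keep the graph undirected
    return X, A, s, y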
§.§.§ Baselines To evaluate the effectiveness of our model, we include representative and state-of-the-art methods, which fall into three categories: (1) plain node classification methods: GCN <cit.>, GraphSAGE <cit.> and GIN <cit.>; (2) fair node classification methods: FairGNN <cit.>, EDITS <cit.>; (3) graph counterfactual fairness methods: NIFTY <cit.> and GEAR <cit.>. Unless otherwise specified, we use GraphSAGE as the model backbone except for the baselines GCN and GIN. We use SAGE to denote GraphSAGE. The detailed descriptions of the baselines are as follows: * GCN <cit.>: GCN is a popular spectral GNN, which adopts a localized first-order approximation of spectral graph convolutions. * GraphSAGE <cit.>: GraphSAGE is a method for inductive learning that leverages node feature information to generate unsupervised embeddings for nodes in large graphs, even if they were not included in the initial training. * GIN <cit.>: Graph Isomorphism Network (GIN) is a graph-based neural network model that can capture different topological structures by injecting the node's identity into its aggregation function. * FairGNN <cit.>: FairGNN uses adversarial training to achieve fairness on graphs. It trains the learned representation via an adversary which is optimized to predict the sensitive attribute. * EDITS <cit.>: EDITS is a pre-processing method for fair graph learning. It aims to debias the input network to remove the sensitive information in the graph data. * NIFTY <cit.>: It simply performs a flipping on the sensitive attributes to get counterfactual data. It regularizes the model to be invariant to both factual and counterfactual data samples. * GEAR <cit.>: GEAR is a method for counterfactual fairness on graphs. It utilizes a variational auto-encoder to synthesize counterfactual samples to achieve counterfactual fairness for graphs. §.§.§ Evaluation Metrics We evaluate the model performance from three perspectives: classification performance, group fairness and counterfactual fairness. (i) For classification performance, we use AUC and the F1 score to measure node classification performance. (ii) For group fairness, following <cit.>, we adopt two commonly used metrics, i.e., statistical parity (SP) Δ_SP and equal opportunity (EO) Δ_EO, which are computed as Δ_SP=|P(ŷ_u=1 | s=0)-P(ŷ_u=1 | s=1)| and Δ_EO=|P(ŷ_u=1 | y_u=1, s=0)-P(ŷ_u=1 | y_u=1, s=1)|. The smaller Δ_SP and Δ_EO are, the fairer the model is. (iii) For counterfactual fairness, as we have ground-truth counterfactuals on the synthetic dataset, following <cit.>, we use the counterfactual fairness metric δ_CF, i.e., δ_CF=|P((ŷ_i)_S ← s|𝐗, 𝐀)-P((ŷ_i)_S ← s^'|𝐗, 𝐀)|, where s, s^'∈{0,1}^N are the sensitive attributes and s^' = 1 - s. (ŷ_i)_S ← s^' is the computed ground-truth counterfactual label obtained with the same data generation process as shown in Figure <ref>. We use the subscript S ← s^' to denote counterfactual computation <cit.>, i.e., keeping the same data generation process and the values of the random noise variables. Counterfactual fairness is only measured on the synthetic dataset. §.§.§ Setup For German Credit, Credit Defaulter and Bail, we follow the train/valid/test split in <cit.>. For the constructed synthetic dataset, we use a 50/25/25 split for training/validation/testing data. We randomly initialize the parameters. 
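For concreteness, the group fairness metrics defined above can be computed in a few lines of numpy. This sketch is ours and purely illustrative; it assumes binary predictions, labels and sensitive attributes, and the counterfactual-fairness helper uses one common reading of δ_CF (the fraction of nodes whose prediction changes under the flipped sensitive attribute).

import numpy as np

def group_fairness_metrics(y_pred, y_true, s):
    # y_pred, y_true, s: 1-D 0/1 arrays over the evaluated (test) nodes.
    y_pred, y_true, s = map(np.asarray, (y_pred, y_true, s))
    # Statistical parity: |P(y_hat=1 | s=0) - P(y_hat=1 | s=1)|
    delta_sp = abs(y_pred[s == 0].mean() - y_pred[s == 1].mean())
    # Equal opportunity: |P(y_hat=1 | y=1, s=0) - P(y_hat=1 | y=1, s=1)|
    delta_eo = abs(y_pred[(y_true == 1) & (s == 0)].mean()
                   - y_pred[(y_true == 1) & (s == 1)].mean())
    return delta_sp, delta_eo

def counterfactual_fairness(y_pred_factual, y_pred_counterfactual):
    # Fraction of nodes whose prediction changes when the sensitive attribute
    # is flipped in the (synthetic) generation process.
    return float(np.mean(np.asarray(y_pred_factual) != np.asarray(y_pred_counterfactual)))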
For each hyper-parameter configuration, we run the experiments with 10 random seeds and grid search for the best configuration based on the performance on the validation set. The Adam optimizer <cit.> is used in our implementation. §.§ Performance Comparison To answer RQ1, we conduct experiments on the real-world datasets and the synthetic dataset in comparison with the baselines. §.§.§ Performance on Real-World Datasets Table <ref> shows the average performance with standard deviation over ten runs on the real-world datasets. The best results are highlighted in bold and the runner-up results are underlined. From Table <ref>, we observe: * Our model can improve group fairness performance. Across the three datasets, Table <ref> shows that our model makes fairer predictions than the other baseline methods and beats all the baselines with respect to the group fairness metrics. * There exists a trade-off between group fairness and prediction performance. Plain node classification methods, such as GCN, GraphSAGE and GIN, tend to have better prediction performance and worse group fairness performance. Fair node classification methods, including FairGNN, EDITS, NIFTY, GEAR and ours, tend to suffer a drop in prediction performance while achieving better group fairness. This shows that the fairness methods tend to use less information. * Our model achieves the best performance on the prediction-fairness trade-off. We use the average rank over the two prediction metrics and the two group fairness metrics to assess the trade-off. Our model ranks 1.75 and the runner-up model ranks 3.83. Our model outperforms the state-of-the-art node representation learning methods, which shows its effectiveness. * Graph counterfactual fairness methods, such as NIFTY, GEAR and ours, achieve better performance than the other baselines. Counterfactual notions grounded in causal theory can capture the causal relationships and help to boost the group fairness performance. §.§.§ Performance on Synthetic Dataset Figure <ref> reports the performance on the synthetic dataset. On the synthetic dataset, we have the desired ground-truth counterfactuals, which can be used to measure the performance of graph counterfactual fairness. We compare our model with plain node classification models and counterfactual fairness models. The observations are as follows: * Our model beats all the other models with respect to the prediction, group fairness and counterfactual fairness metrics. We argue that, under our assumed biased generation process, our model can effectively find invariant, sufficient and informative representations to make accurate and fair predictions. * Other graph counterfactual fairness-based methods, including NIFTY and GEAR, cannot consistently outperform the other methods. These methods design their models without considering meaningful causal relationships. NIFTY simply perturbs the sensitive attribute and omits the further influence on features and graph structure. GEAR adopts an ill-supervised GraphVAE to help model the causal relationships, which may fail to generate meaningful counterfactuals. §.§ Flexibility for Various Backbones To show the flexibility of our framework in improving the fairness of various backbones while maintaining high classification accuracy, other than GraphSAGE, we also plug our model into GCN and GIN. Figure <ref> shows the classification performance and fairness performance on Bail and Credit. From Figure <ref>, we observe that, compared with the backbones, our model can significantly improve fairness with no or only a marginal decrease in classification performance. 
For example, on the Bail dataset, the prediction performance of our model with the GIN backbone drops by 0.54% on AUROC, but Δ_SP drops by 1.37% and Δ_EO drops by 0.86%, which is an improvement in fairness performance. This demonstrates the flexibility of our framework in benefiting various backbones. §.§ Quality of Counterfactuals To answer RQ2, we compare the counterfactuals obtained by our model with the ground-truth counterfactuals to investigate whether we can obtain the desired counterfactuals. We conduct experiments on the synthetic dataset, which has ground-truth counterfactuals. We first use our model to obtain counterfactuals. To measure the discrepancy of the obtained counterfactuals with respect to the features and structure in the ego graph, we compare the learned counterfactual representations with the ground-truth counterfactual representations. We compare our model with two graph counterfactual fairness baselines, i.e., NIFTY <cit.> and GEAR <cit.>. NIFTY simply flips the sensitive attributes to get its counterfactuals. GEAR uses a GraphVAE to generate the counterfactuals based on self-perturbation and neighbor-perturbation. Figure <ref> shows the average result over all the nodes on the synthetic dataset. We show that our model can find better counterfactuals than the other graph counterfactual fairness models, i.e., counterfactuals with a smaller discrepancy from the ground truth. The result also shows that there is still room for existing methods to improve in obtaining appropriate counterfactuals. §.§ Ablation Study In our model, the pre-trained model can provide pseudo-labels for the nodes in the unlabeled set. Thus, we can select counterfactuals from the entire dataset. The model trained from scratch, without any pre-training, is denoted as the variant -NP. Without pseudo-labels, we can only select counterfactuals from the training set, which is denoted as the variant -NS. We evaluate the performance on the synthetic dataset. The results are reported in Table <ref>. We find that the variant -NS performs worse than the full model but better than -NP. This shows that the pseudo-labels can also boost the performance of our model. Usually, the training set is small and the model may not obtain the desired counterfactuals from the limited data points. Although pseudo-labels may contain some noisy information, they can still help improve the model performance. We further delve into how the constraints impact performance. When merely setting α=0 or β=0, we denote the model as -NA and -NB, respectively. The variants -NA and -NB outperform SAGE, yet fall short when compared to the full model. This indicates that both the sufficiency and invariance constraints collectively contribute to the superior performance of our model. §.§ Hyper-Parameter Sensitivity Analysis There are two important hyperparameters in our framework, i.e., α and β. α controls the contribution of the invariance regularization ℒ_inv and β controls the contribution of the sufficiency regularization. To understand the impact of α and β, we fix β at 5 and vary α over {0, 1,…, 18}. Similarly, we fix α at 1 and vary β over {0, 1,…, 18}. We report the results on the German dataset in Figure <ref>. From Figure <ref>, we have the following observation: there exists a trade-off between prediction performance and fairness performance. The trend is that when we increase α and β, we get worse prediction performance and better fairness performance. We argue that without these regularizations, the model may rely on sensitive information to make the prediction. 
When we strengthen the regularization, the model further disentangles the content representations and squeezes the sensitive information out. Thus, the prediction performance gets worse while the fairness performance gets better. § CONCLUSION AND FUTURE WORK In this paper, we study the problem of learning fair node representations with GNNs. We first formalize the biased graph generation process with an SCM. Motivated by causal theory, we propose a novel framework to learn fair node representations which meet the graph counterfactual fairness criterion and achieve a good prediction-fairness trade-off. Specifically, we align the model design with the data generation process and convert the problem into learning the content representations in the causal graph. We derive several properties of the optimal content representation from the causal graph, i.e., invariance, sufficiency and informativeness. To get appropriate supervision for the invariance regularization, we design a counterfactual selection module. Extensive experiments demonstrate that our framework can achieve state-of-the-art performance on the synthetic dataset and real-world datasets with respect to the prediction-fairness trade-off. There are several interesting directions worth exploring. First, in this paper, we mainly focus on binary classification and binary sensitive attributes. We will extend the work to multi-class classification and multi-category sensitive attributes. Second, in this paper, we focus on static graphs, while there are many different kinds of graphs in the real world. Thus, we aim to extend our model to more complex graph learning settings, such as dynamic graphs and multi-value sensitive attributes and labels.
http://arxiv.org/abs/2307.04746v1
20230710175344
Classical Observables from the Exponential Representation of the Gravitational S-Matrix
[ "Poul H. Damgaard", "Elias Roos Hansen", "Ludovic Planté", "Pierre Vanhove" ]
hep-th
[ "hep-th", "gr-qc" ]
CERN-TH-2023-135, IPhT-T23/041, LAPTh-029/23

Poul H. Damgaard^a,b, Elias Roos Hansen^a, Ludovic Planté^a, Pierre Vanhove^c

[a] Niels Bohr International Academy, Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, DK-2100 Copenhagen, Denmark
[b] Theoretical Physics Department, CERN, 1211 Geneva 23, Switzerland
[c] Institut de Physique Theorique, Université Paris-Saclay, CEA, CNRS, F-91191 Gif-sur-Yvette Cedex, France

By combining the KMOC-formalism with the exponential representation of the scattering matrix we show that the two-body scattering angle is given by the corresponding matrix element of the exponential representation. This holds to all orders in the Post-Minkowskian expansion of gravity when restricted to the conservative sector. Once gravitational radiation is taken into account new terms correcting this relationship appear starting at fourth Post-Minkowskian order. A systematic expansion of the momentum kick is provided to any order, thus illustrating the iterative structure that partly recycles terms from lower orders in the Post-Minkowskian expansion. We provide explicit results for this computation to fourth Post-Minkowskian order, the first complete calculation at this order based on scattering amplitudes.

Classical Observables from the Exponential Representation of the Gravitational S-Matrix
August 12, 2023

§ INTRODUCTION While the Post-Minkowskian expansion of general relativity <cit.> has been highly successful in solving the relativistic two-body problem by means of modern amplitude techniques, new and puzzling features seem to appear at every new order considered. The second-order Post-Minkowskian solution of Westpfahl <cit.> was easily reproduced by amplitude methods <cit.> but already the first solution to third Post-Minkowskian order <cit.> displayed an unphysical divergence in the scattering angle that could not be understood within the conservative framework used. The resolution was to be found when including radiation reaction of the gravitational field <cit.>. Remarkably, soft gravitons cancelled the unwanted divergence in the scattering angle, thereby reproducing the classic result of Amati, Ciafaloni, and Veneziano <cit.>. Moreover, to this third Post-Minkowskian order a standard quantum field theoretic evaluation of the full classical part of the gravitational two-to-two scattering amplitude precisely yields the correct scattering angle <cit.>, the simple resolution being found in the need to include all classical pieces from the two-loop scattering amplitude. 
As explained in the latter two references, those classical parts can be systematically identified through the so-called velocity cuts of the scattering amplitude: delta-function contributions that emerge from combinations of propagators with the Feynman iϵ-prescription. For reviews of these ideas see, e.g., ref. <cit.>. Among the many lessons learned at that third Post-Minkowskian order has been the need to understand how to subtract terms that diverge in the classical limit in order to yield unambiguously those parts of the scattering amplitude that remain finite when ħ→ 0. These delicate cancellations have their root in the conventional use of the Born expansion of quantum field theory. Parametrizing the S-matrix as Ŝ = 1 + iT̂/ħ, unitarity of Ŝ leads to the optical theorem through T̂ - T̂^† = i/ħT̂T̂^† . This relation shows how the perturbative expansion of the T-matrix to any given order in the coupling constant cross-talks with lower-order terms and parts of those will have increasingly higher inverse powers of ħ. This is the origin of the eikonal exponentiation in impact parameter space <cit.>. It is also the origin of the need to introduce the well-known Born subtractions, whether implemented by effective field theory methods <cit.> or, equivalently, by solving the Lippmann-Schwinger equation associated with the corresponding relativistic Hamiltonian <cit.>. Inspired by the different subtraction scheme behind the calculation of the conservative part to fourth Post-Minkowskian order of ref. <cit.>, an alternative representation of the S-matrix was suggested in ref. <cit.>. In this representation, an Hermitian scattering matrix, denoted N, is introduced through the operator identification Ŝ = exp[iN̂/ħ]  . It was conjectured in ref. <cit.> that two-to-two matrix elements of the operator N̂, after a transform to impact-parameter space, yields the radial action and hence, by simple differentiation, also the scattering angle. This was verified explicitly to third Post-Minkowskian order <cit.> and later checked, in the probe limit, up to fifth Post-Minkowskian order <cit.>. More recently, the exponential representation has also been checked against the fourth Post-Minkowskian order calculation of ref. <cit.> for arbitrary masses <cit.> but not including all radiation effects. There is thus substantial evidence that the exponential representation of the gravitational S-matrix captures the classical dynamics of the conservative sector (and even parts of radiative effects) but a proof has so far still been lacking. One purpose of this paper is to provide such a proof. Matrix elements of the exponential representation of the S-matrix resemble, after transforming to impact parameter space, the quantum field theoretic eikonal <cit.>. We stress, however, that these two representations are quite distinct beyond leading order. The N̂-operator encapsulates by construction the semi-classical limit of the S-matrix and its two-to-two matrix element is therefore expected to yield the corresponding radial action. Because N̂ is already in the exponent there are no superclassical contributions to it and all corrections to the radial action will be of quantum mechanical origin (and therefore not of interest here). The N̂-operator is thus more closely related to the WKB approximation than to the eikonal[For a recent comprehensive review of the eikonal formalism, see ref. <cit.>.]. 
Two other formalisms will be central to the understanding of gravitational two-body scattering in the Post-Minkowskian expansion. One is the KMOC formalism <cit.>, the other is the Post-Minkowskian worldline formalism <cit.>. The KMOC framework is, after appropriate reductions to the point-particle limit, intimately related to the amplitude approach to gravitational scattering. Indeed some of the first resolutions of the puzzles at third Post-Minkowskian order came from expressing KMOC observable in the form of cut amplitudes by reverse unitarity <cit.>. The worldline approach differs conceptually in that the classical limit ħ→ 0 can be taken from the outset, thus eliminating the need for subtractions altogether. In the end, the resulting integrals that must be evaluated are nevertheless very similar and they are, not surprisingly, very closely related to the integrals that need to evaluated in the amplitude-based approach. It becomes particularly clear in terms of the velocity cut method where the correspondence up to third Post-Minkowskian order has been shown to be one-to-one <cit.>. This is not surprising in view of the fact that both formalisms amount to solving the classical Einstein field equations by Green function methods. New issues have appeared at fourth Post-Minkowskian order of the gravitational expansion. These are related to both angular momentum loss and energy loss during the scattering process, losses which are due to the gravitationally radiated angular momentum and energy <cit.>. There has been much progress on how to incorporate these effects in the eikonal formalism <cit.> but so far a complete computation has only been reported in work using the worldline formalism <cit.>. In order to tackle dissipation at this order, the worldline calculations have been rephrased in terms of the closed time paths of the Schwinger-Keldysh kind <cit.>. This leads to a doubling of degrees of freedom, the use of retarded (or advanced) propagators, and in general a much larger set of master integrals due to less symmetry of the integrands. It is interesting to contrast this with the KMOC formalism which provides S-matrix expressions for the same quantities but based on standard amplitudes with Feynman propagators. In a recent paper <cit.> we have demonstrated the equivalence between the KMOC and worldline formulations in the classical limit. While this non-trivial relationship has been established on general grounds, it is interesting that dissipative effects are accounted for quite differently in the two formulations due to the difference between Feynman and retarded/advanced propagators. In this paper we combine the KMOC-formalism with the exponential representation of the S-matrix. We shall argue that such a combination is more economical than the conventional one based on the linear T-matrix representation of the S-matrix. It leads to very compact formulas for classical observables in gravity based on amplitudes and it clarifies the inclusion of radiative effects in a simple diagrammatic fashion. Importantly, because the KMOC formalism makes no distinction between conservative and dissipative contributions, classical observables are extracted in a universal manner from the matrix elements of the N̂-operator by retaining all classical pieces. As in the full amplitude computation at third Post-Minkowskian order <cit.> there is no need to separate different contributions. 
At any order in the expansion one only has to extract all classical terms of the matrix elements of N̂ and derived quantities thereof. While equivalent to the worldline formulation in the Keldysh-Schwinger path integral, the formulas we shall present here have a structure that is straightforward to implement in terms of modern amplitude methods. Having different consistent formulations available is clearly an advantage and there is now a variety of approaches available for the Post-Minkowskian expansion (see also refs. <cit.>). This is particularly important when the Post-Minkowskian expansion enters the new uncharted territory of higher orders. We shall illustrate the simplicity of the combination of the N̂-operator with the KMOC formalism by computing the full momentum kick (and hence scattering angle) to fourth Post-Minkowskian order. As we shall show, the required basis of master integrals is significantly smaller than that used in refs. <cit.> due to the fact that we need only use Feynman propagators. Nevertheless, our results agree. § THE EXPONENTIAL REPRESENTATION OF THE GRAVITATIONAL S-MATRIX In this section we briefly review the exponential operator representation of the S-matrix. We first fix conventions. We consider the Einstein-Hilbert action of two massive scalars (of masses m_1 and m_2) coupled to gravity, S_EH = ∫ d^4 x √(-g)[R/16 π G + 1/2∂_μϕ_1∂^μϕ_1 + 1/2∂_μϕ_2∂^μϕ_2 - m_1^2/2 ϕ_1^2 - m_2^2/2 ϕ_2^2] . The Newtonian coupling is denoted by G and R is the Ricci scalar. We use a mostly-minus metric with flat Minkowski space at infinity, diag η_μν≡(1,-1,-1,-1), and expand the full metric as g_μν(x)≡η_μν + √(32 π G)h_μν(x). [Figure: the two-to-two scattering process, with incoming momenta p_1, p_2 (masses m_1, m_2), outgoing momenta p'_1, p'_2, and a shaded blob representing the full interaction.] In this section we write everything in the standard language of in-out states and consider the two-to-two scattering with p_1 and p_2 denoting incoming momenta and p_1' and p_2' outgoing momenta with p_1^2 = p_1'^2 = m_1^2 and p_2^2 = p_2'^2 = m_2^2. In the centre-of-mass frame with p_1=(E_1(p),p⃗), p_2=(E_2(p),-p⃗) we have (p_1+p_2)^2 = (p_1'+p_2')^2 = m_1^2+m_2^2+2m_1 m_2 γ, γ≡p_1 · p_2/m_1 m_2 , (p_1-p_1')^2 = (p_2'-p_2)^2≡ q^2=-q⃗^ 2 . In ordinary scattering theory we wish to compute S-matrix elements. Here, instead, we shall focus on matrix elements of the Hermitian operator N̂ defined by eq. (<ref>), in particular, for two-to-two scattering, N(γ,q^2)=⟨ p_1',p_2' |N̂| p_1,p_2⟩ . This should be contrasted with the standard Born expansion of the S-matrix based on Ŝ=1 + i/ħT̂ and the usual scattering amplitude M(p_1,p_2,p_1',p_2') defined by ⟨ p_1',p_2'| T̂ | p_1, p_2 ⟩ = (2πħ)^Dδ^(D)(p_1+p_2-p_1'-p_2') M(p_1,p_2,p_1',p_2') , in dimensions D=4-2ϵ. As detailed in ref. <cit.> it is straightforward to expand the exponential representation and derive the infinite sequence of relations between the operators N̂ and T̂ in perturbation theory. In the two-to-two sector the operators have perturbative expansions that we can write compactly as T̂ = GT̂_0 + G^3/2T̂_0^ rad + G^2T̂_1 + G^5/2T̂_1^ rad + G^3T̂_2 + ⋯ and N̂ = GN̂_0 + G^3/2N̂_0^ rad + G^2N̂_1 + G^5/2N̂_1^ rad + G^3N̂_2 + ⋯ , from which we can straightforwardly solve for the N̂'s in terms of the T̂'s by expanding the exponential. 
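For orientation, and assuming nothing beyond the expansion of the exponential just described, matching powers of G order by order gives at the lowest orders N̂_0 = T̂_0 , N̂_0^ rad = T̂_0^ rad , N̂_1 = T̂_1 - i/2ħT̂_0^2 , N̂_1^ rad = T̂_1^ rad - i/2ħ(T̂_0 T̂_0^ rad + T̂_0^ rad T̂_0) . These low-order relations are included here only as an illustrative check of the counting; they are easily verified by expanding exp(iN̂/ħ) to the required order, and the same pattern continues at order G^4 in the relation displayed below.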
Integer powers of G describe interactions with an even number of graviton vertices while half-integer powers describe interactions with an odd number of gravitons. The separation of operators with superscript rad refers only to the associated half-integer power of G. We find it useful diagrammatically to make this distinction (see also below) but it has no further meaning beyond this. There are clearly also radiative terms in the even powers. At order G^4 the relation reads N̂_3 = T̂_3 - i/2ħ(N̂^ rad_1N̂^ rad_0+N̂^ rad_0N̂^ rad_1)- i/2ħT̂_1^2 - i/2ħ(T̂_0 T̂_2+T̂_2 T̂_0) - 1/12ħ^2 [N̂_0^rad,[N̂_0^rad,N̂_0]] - 1/3ħ^2(T̂_0^2 T̂_1+T̂_0 T̂_1T̂_0 + T̂_1T̂_0^2) + i/4 ħ^3T̂_0^4 . and it is elementary to generalize this to higher orders. Note that we have combined some of the T-matrices into N-matrices on the right hand side, thus making the cancellation among superclassical pieces associated with those manifest. This also aids in understanding the separation into real and imaginary parts. We remind that N̂ is Hermitian so that two-to-two scalar matrix elements of that operator are real. The obvious way to evaluate matrix elements of the N̂ operator by conventional field theory methods is to insert a complete set of momentum eigenstates between all products of T-matrices and truncate to the desired order in G. Then matrix elements can be evaluated by standard Feynman rules of scattering theory. Here the complete set of states is spanned by two massive scalar particles: one of momentum k_1 and mass m_1, the other of momentum k_2 and mass m_2, together with any number n of massless gravitons. We denote such states by |k_1,k_2;ℓ_1,…,ℓ_n⟩. These states are normalized relativistically according to ⟨ k_1,k_2;ℓ_1,…,ℓ_n|k'_1,k'_2;ℓ'_1,…,ℓ'_m⟩ = δ_n,m∏_i=1^2 2E_k_i (2πħ)^D-1δ^(D-1)(k_i-k'_i)×∏_i=1^n 2E_ℓ_i (2πħ)^D-1δ^(D-1)(ℓ_i-ℓ_i') , and the completeness relation is given by 1 = ∑_n=0^∞1/n!∫∏_i=1^2 dΠ_k_i∏_r=1^n dΠ_ℓ_r |k_1,k_2;ℓ_1,…,ℓ_n⟩⟨ k_1, k_2; ℓ_1, …ℓ_n |. including a sum over graviton helicities. Here dΠ is the standard Lorentz invariant phase space measure, i.e., dΠ_k_i= d^Dk_i/(2πħ)^D-1δ^+((k_i)^2-m_i^2) = d^Dk_i/(2πħ)^D-1θ(k_i^0)δ((k_i)^2-m_i^2) for i=1,2 for the massive states, and similarly for the massless gravitons. We now insert the completeness relation between all operator products to get the three-loop relation between matrix elements of the N̂ and the T̂ operators ⟨ p_1',p_2'| N̂_3 |p_1,p_2⟩ = ⟨ p_1',p_2'| T̂_3 |p_1,p_2⟩+L_0+L_1+L_2 which we then expand in powers of G. Keeping track of this overall power of G, we can view it as an expansion in the number of gravitons connecting the operators. 
First, with just the massive two-particle states inserted (in the original these contributions are drawn as blob diagrams, with the velocity cuts marked on the massive intermediate lines; we list them here in operator form), L_0 collects the matrix elements of -i/2ħ( T̂_0T̂_2 + T̂_2T̂_0 + T̂_1T̂_1 ) - 1/3ħ^2( T̂_0T̂_0T̂_1 + T̂_0T̂_1T̂_0 + T̂_1T̂_0T̂_0 ) + i/4ħ^3 T̂_0T̂_0T̂_0T̂_0 , with the purely massive two-particle states of eq. (<ref>) inserted between each pair of adjacent operators. Next, with the inclusion of one graviton in a single insertion, L_1 collects the matrix elements of -i/2ħ( N̂_0^ radN̂_1^ rad + N̂_1^ radN̂_0^ rad ) - 1/12ħ^2 N̂_0^ radN̂_0^ radN̂_0 + 1/6ħ^2 N̂_0^ radN̂_0N̂_0^ rad - 1/12ħ^2 N̂_0N̂_0^ radN̂_0^ rad , where in each term the intermediate graviton is emitted by one of the rad-operators and absorbed by the other; in the 1/6ħ^2 term the graviton line skips the internal N̂_0. Finally, with the one-graviton state appearing in both insertions one finds L_2, the two Compton-type contributions of the ordering N̂_0^ radN̂_0N̂_0^ rad, each with coefficient 1/6ħ^2, in which the graviton emitted by the first operator is absorbed by the last one after interacting with the internal N̂_0, while one of the two massive scalar lines (either the upper or the lower one) skips the internal operator. Note that the completeness relation enforces the inclusion of graph topologies that are partly disconnected, such as the graviton line skipping one internal operator as well as the Compton-type contributions in the last case where scalars skip an internal operator. Such intermediate states begin to contribute for the first time at fourth Post-Minkowskian order because up to and including third Post-Minkowskian order they have no support on physical kinematics. To fourth order in G no further insertions of graviton states are possible when evaluating N-matrix elements through use of eq. (<ref>). Although written as an apparent expansion in 1/ħ one must keep in mind that additional factors of ħ (of both positive and negative powers) arise when computing matrix elements. 
Since matrix elements of the N̂ are manifestly free of superclassical contributions, the subtractions on the right hand side of eq. (<ref>) ensure cancellations among all superclassical terms arising from the T̂-matrix, here including order 1/ħ^3-terms. We shall show in section <ref> how this implies the cancellation of the superclassical terms when evaluating observables in the KMOC formalism. One advantage of the exponential representation is that we can ignore these superclassical cancellations that are guaranteed to occur anyway and thus focus exclusively on the pieces that have a well-defined ħ→ 0 limit. The systematic way to extract this classical limit of matrix elements of the N̂-operator is by means of velocity cuts. This will be described next. §.§ The classical limit and velocity cuts The notion of velocity cuts <cit.> is computationally useful for extracting the classical limit. The basic idea is to combine massive propagator lines in pairs, each having denominators that are linear in the external momenta but with opposite signs, thus effectively reducing to delta-function constraints that are linear in momenta. Ignoring soft momentum corrections, this puts the massive lines on-shell and removes one momentum integration, thus enforcing the first link to the classical worldline formalism. The classical limit ħ→0 of the massive amplitude is obtained by scaling the momentum transfer q=ħq̅ with q̅ fixed, and scaling the loop integration momenta ℓ_i=ħ|q̅| ℓ̅_i. The amplitude will involve two massive propagators, 1/((ℓ+p_r)^2-m_r^2+iε) = 1/(2ℓ· p_r+ℓ^2+iε) , r=1,2 , where ℓ is a generic loop momentum. In the classical limit we have 1/(2 ħ |q̅| ℓ̅· p_r+ ħ^2|q̅|^2 ℓ̅^2+iε) ≃ 1/(2 ħ |q̅|) 1/(ℓ̅· p_r+ iε), so that the ℓ̅^2 part is subleading and the massive propagators effectively become linear. Combinations of such linear propagators using lim_ε→0( 1/(2ℓ· p_r+ℓ^2+iε) + 1/(2ℓ· p_r-ℓ^2+iε)) =-2iπδ(2ℓ· p_r) lead to δ-function insertions in the loops. The higher order O(ħ^2q̅^2) pieces do not contribute to the classical result, thus eventually making the link to the classical worldline formalism, as we shall discuss below. The classical part of the massive two-to-two amplitude at L-loop order has exactly L velocity cuts <cit.>. Therefore the classical amplitude can be reduced on a special class of Post-Minkowskian master integrals with L such delta-function insertions. This set of master integrals also arises from the worldline formalism <cit.> as explained at two-loop order in ref. <cit.>. An alternative approach is based on the heavy-mass expansion of scattering amplitudes <cit.>. The classical terms can be re-organized in terms of a heavy mass expansion rather than as the ħ→ 0 viewpoint taken here. The result is an effective field theory of linearized massive propagators and, in loops, precisely corresponding to the velocity cuts <cit.>. § THE KMOC FORMALISM AND THE EXPONENTIAL REPRESENTATION The KMOC formalism as originally defined in <cit.> considers an initial in-state of two massive scalars at time t=-∞, |in⟩ = ∫dΠ_p_1dΠ_p_2Φ̃_1(p_1) Φ̃_2 (p_2)e^i b· p_1/ħ|p_1,p_2;0⟩ where the state |p_1,p_2;0⟩ is a momentum eigenstate of two massive scalars and the “0” indicates that there is no radiation present at t = -∞. In the classical limit the wavefunctions Φ̃(p_i) are chosen so as to represent two localized scalars separated by impact parameter b^μ. A complete set of states containing an arbitrary number of gravitons is as described in eq. (<ref>) but the initial state at t=-∞ is taken to be free of gravitons, as shown. 
A change in an observable corresponding to an operator Ô from t = -∞ to t = +∞ is then <cit.>, ⟨ΔÔ⟩ = ⟨in|Ŝ^†ÔŜ|in⟩ - ⟨in|Ô|in⟩ = ⟨in|Ŝ^† [Ô,Ŝ]|in⟩ . Using the linear Born representation of the S-matrix (<ref>) leads to the KMOC formula ⟨ΔÔ⟩ = (i/ħ)⟨in|[Ô,T̂]|in⟩ + (1/ħ^2)⟨in|T̂^†[Ô,T̂]|in⟩ In the small ħ limit this expression leads to the evaluation of the change in a classical observable after the delicate cancellations of superclassical terms. Here we instead explore consequences of using the exponential representation of the S-matrix. This will lead to a simple and efficient way to extract the change in a classical observable, including dissipative effects. In an alternative viewpoint we consider the change ΔÔ of an operator Ô from t=-∞ to t=+∞ as ΔÔ=Ŝ^†ÔŜ- Ô , which then has to be evaluated between in-states at t=-∞. Inserting the exponential representation of the Ŝ operator of eq. (<ref>) together with the crucial property of Hermiticity of N̂, ΔÔ=e^-iN̂/ħÔe^iN̂/ħ - Ô , allows us to rewrite eq. (<ref>) by means of the Campbell identity that expands the two exponentials as an infinite sum of nested commutators, ΔÔ=∑_n ≥ 1(-i)^n/(ħ^n n!)[N̂,[N̂,…,[N̂,Ô]]]_n times. This rewriting, which is where we use unitarity of the S-matrix, will play a crucial role in our all-order proofs because it displays the iterative structure of the KMOC formalism when combined with the exponential representation. It is convenient to define Â_n^Ô≡1/ħ^n[N̂,[N̂,…,[N̂,Ô]]]_n times . The nested commutator structure implies the operator relation Â_n^Ô=Â_1^Â_n-1^Ô=Â_1^Â_1^⋯^Â_1^Ô. Importantly, when we evaluate matrix elements by means of insertions of complete sets of states, this iterative structure is preserved (since all we do is to insert factors of unity). Repeating the steps described in ref. <cit.>, we can insert the above expression in the KMOC-expression and take the limit of localized massive states. The result is ⟨ΔÔ⟩ (p_1,p_2,b)= ∫d^Dq/(2π)^D-2δ(2p_1· q - q^2)δ(2p_2· q + q^2)e^ib· q/ħ⟨ p_1'p_2' | Δ O | p_1 p_2⟩ where p_1' = p_1 - q and p_2' = p_2 + q. In this form it is clear that a first step is the evaluation of the matrix element ⟨ p_1'p_2' | Δ O | p_1 p_2⟩, followed by the shown Fourier transform to b-space. One noticeable feature of the KMOC-formalism for (non-spinning) black-hole scattering is that it always entails the evaluation of matrix elements of an operator (<ref>) between two-particle scalar states. For an observable corresponding to an Hermitian operator Ô the corresponding Δ O is clearly Hermitian as well. Two-particle scalar matrix elements of this Δ O are then real, as follows from time-reversal symmetry. The reality of the expectation value is preserved by the insertion of the completeness relation since it just amounts to the insertion of factors of unity. §.§ Cancellation of superclassical terms: the conservative sector In this section we first show how the N-operator formalism provides a simple way to demonstrate the cancellation of the superclassical pieces when restricted to the conservative sector. We next give a general formula valid to all orders in G for a scalar operator in section <ref> and a vector operator in section <ref>. The application to the momentum kick Δ P_1 is pursued in section <ref>. §.§.§ The classical limit We start with a scalar operator Ô and consider the term with n=1 in (<ref>) 𝒜_1^O(p_1,p_2,q)=1/ħ⟨ p_1',p_2'| [N̂, Ô]|p_1,p_2⟩ and we first analyze the conservative case where gravitons are not included in the set of inserted on-shell states. 
This is graphically represented as 𝒜_1^O(p_1,p_2,q)|^ cons.= [scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [color = black, fill=white, very thick] (-2,0) circle (1cm) node N; [color = black, fill=white, very thick] (2,0) circle (1cm) node O; - [scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [color = black, fill=white, very thick] (-2,0) circle (1cm) node O; [color = black, fill=white, very thick] (2,0) circle (1cm) node N; where the red line indicates where we insert the intermediate two-particle state, corresponding to 𝒜_1^O(p_1,p_2,q)= 1/ħ∫ dΠ_q_1 dΠ_q_2(⟨ p_1',p_2'| N̂|q_1,q_2⟩⟨ q_1,q_2 |Ô|p_1,p_2⟩- ⟨ p_1',p_2'| Ô|q_1,q_2⟩⟨ q_1,q_2 |N̂|p_1,p_2⟩). It is convenient to factor out overall energy-momentum conservation and write ⟨ p_1',p_2'| N̂|p_1,p_2⟩= N(γ ,q^2) (2πħ)^D δ(p_1'+p'_2-p_1-p_2) and ⟨ p_1',p_2'| Ô|p_1,p_2⟩= O(p_1',p_2',q) (2πħ)^D δ(p'_1+p'_2-p_1-p_2). We can use one of the energy-momentum conservation delta-functions to remove the integration variable q_2. After defining k_1=q_1-p_1 and using the scaled momenta q̅ and k̅_1 such that p_1'=p_1-q=p_1-ħq̅, p_2'=p_2+q=p_2+ħq̅ we change variables to get 𝒜_1^O(p_1,p_2,q)=ħ∫ d^D k_1/(2π)^D-2δ^+((p_1+ħk_1)^2-m_1^2)δ^+((p_2-ħk̅_1)^2-m_2^2)×(N(γ,ħ^2 (k_1+q)^2) O(p_1,p_2,-ħk_1) - O(p_1+ħk_1,p_2-ħk_1,ħ(q+k_1))N(γ,ħ^2 k_1^2) )× (2πħ)^D δ(p_1+p_2-p_1'-p_2'). Setting 𝒜_1^O(p_1,p_2,q)= A_1^O(p_1,p_2,q) (2πħ)^D δ(p_1+p_2-p_1'-p_2') and changing variables k_1 → -k_1-q in the second term of the sum gives A_1^O(p_1,p_2,q)=ħ∫ d^D k_1/(2π)^D-2 N(γ,ħ^2 (k_1+q)^2) O(p_1,p_2,-ħk_1)δ^+((p_1+ħk_1)^2-m_1^2)δ^+((p_2-ħk_1)^2-m_2^2) - ħ∫ d^D k_1/(2π)^D-2 O(p_1-ħ (k_1+q),p_2+ħ (k_1+q),-ħk_1)N(γ,ħ^2 (k_1+q)^2) ×δ^+((p_1-ħ (k_1+q))^2-m_1^2)δ^+((p_2+ħ(k_1+q))^2-m_2^2). Doing the small ħ expansion of the integrand leads to O(p_1,p_2,-ħk_1)δ^+((p_1+ħk_1)^2-m_1^2)δ^+((p_2-ħk_1)^2-m_2^2) -O(p_1-ħ (k_1+q),p_2+ħ (k_1+q),-ħk_1)δ^+((p_1-ħ (k_1+q))^2-m_1^2)δ^+((p_2+ħ(k_1+q))^2-m_2^2) =2/ħ ((k_1+q) ·k_1)O(p_1,p_2,-ħk_1)((δ^+)'(2 p_1 ·k_1)δ^+(-2 p_2 ·k_1)+δ^+(2 p_1 ·k_1)(δ^+)'(-2 p_2 ·k_1)) +1/ħ (k_1^μ+q^μ)(∇^μ O(p_1,p_2,-ħk_1))δ^+(2 p_1 ·k_1)δ^+(-2 p_2 ·k_1), where we have introduced the derivative ∇_μ [ℱ]≡∂ℱ/∂ p_1^μ-∂ℱ/∂ p_2^μ. Consequently the ħ expansion of A_1^O takes the form A_1^O(p_1,p_2,q)=∫d^D k_1/(2π)^D-2 N(γ,ħ^2 (k_1+q)^2)× (k_1^μ+q^μ)∇_μ(O(p_1,p_2,-ħk_1)δ^+(2 p_1 ·k_1)δ^+(-2 p_2 ·k_1))+𝒪(ħ) Here, crucially, N(γ,ħ^2 (k_1+q)^2) by construction has only classical and quantum parts. This means that for classical observables O the matrix element A_1^O will have a leading piece which is classical, followed by quantum corrections. There are no superclassical pieces in A_1^O. By recursion it follows that this holds for A_n^O and any n as well. Although the completeness relation has a positive energy constraint, this is automatically satisfied in the classical limit for the massive scalars of positive energy, δ^+((p_1-ħk_1)^2-m_1^2)=θ(p_1^0-ħk_1^0) δ((p_1-ħk_1)^2-m_1^2)≃θ(p_1^0) δ(-2 ħ p_1·k_1+ħ^2 k_1^2) . To conclude, we have shown that the classical piece of A_1^O is given by A_1^O(p_1,p_2,q) = ∫d^D k_1/(2π)^D-2 N(γ,(k_1+q)^2) (k_1^μ+q^μ)∇_μ [O(p_1,p_2,- k_1)δ(2 p_1· k_1)δ(-2 p_2· k_1)] after setting ħ =1. Note that this is an all-order statement in G. 
Iterating, it follows that all higher commutators and hence also the full expectation value are free of superclassical pieces when evaluated in the conservative sector. §.§.§ Vector operators Let us now consider the application of the general iterative formula of eq. (<ref>) to a special class of four-vector operators O^μ(p_1,p_2,q)=⟨ p_1',p_2'| Ô^μ |p_1,p_2⟩ that decompose into longitudinal O_∥(γ,q^2) and transverse O_⊥(γ,q^2) parts as follows: O^ν(p_1,p_2,q) = O_∥((p_1+p_2)^2,q^2)L^ν + O_⊥((p_1+p_2)^2,q^2) q^ν. It is convenient to introduce the four-vector L^μ ≡ ((m_2^2+m_1 m_2 γ)p_1^μ - (m_1^2+m_1 m_2 γ)p_2^μ)/(m_1^2 m_2^2(γ^2-1)) which satisfies the relations L· p_2 = 1 , L· p_1 = -1 , b· L=0 , ∇^μL_μ = 1/p_∞^2, where we used that the impact parameter b^μ lies in the plane of scattering and is orthogonal to both p_1^μ and p_2^μ. Because L· q=O(q^2), we also have L· q=0 in q-space, before the Fourier transform to b-space. Since -p_1· q=p_2· q=q^2/2, p_1 and p_2 are indeed orthogonal to q in the classical limit. The decomposition in (<ref>) is clearly not valid for an arbitrary four-vector but it is satisfied by the momentum kick ⟨Δ P_1^μ⟩ when evaluated in the conservative sector as we will do in section <ref>. To evaluate the classical part of the first commutator A_1^O^ν =1/ħ⟨ p_1',p_2'|[N̂,Ô^ν]| p_1,p_2⟩ using the expression (<ref>) we begin by acting with the derivative ∇_μ in (<ref>). It is useful to note that ∇_μ (p_1+p_2)^2=0 and ∇_μ k_1^ν=0 so that ∇_μ O_r((p_1+p_2)^2,-k_1)=0 for both r=∥ and r= ⊥. We then get ∇_μ(O^ν(p_1,p_2,- k_1)δ(2 p_1 · k_1)δ(-2 p_2 · k_1)) =1/p_∞^2 O_∥((p_1+p_2)^2,k_1^2) δ_μ^νδ(2 p_1 · k_1)δ(-2 p_2 · k_1) + 2 k_1μ(O_∥((p_1+p_2)^2,k_1^2)L^ν - O_⊥((p_1+p_2)^2,k_1^2) k_1^ν) (δ'(2 p_1 · k_1)δ(-2 p_2 · k_1)+δ(2 p_1 · k_1)δ'(-2 p_2 · k_1)), which we can insert into eq. (<ref>), keeping only the classical part: A_1^O^ν(p_1,p_2,q)=1/p_∞^2∫d^D k_1/(2π)^D-2 N(γ,(k_1+q)^2) (k_1^ν+q^ν)O_∥((p_1+p_2)^2,k_1^2)δ(2 p_1 · k_1)δ(-2 p_2 · k_1) + 2∫d^D k_1/(2π)^D-2 N(γ,(k_1+q)^2) (k_1+q)· k_1 (O_∥((p_1+p_2)^2,k_1^2)L^ν) ×(δ'(2 p_1 · k_1)δ(-2 p_2 · k_1)+δ(2 p_1 · k_1)δ'(-2 p_2 · k_1)) -2∫d^D k_1/(2π)^D-2 N(γ,(k_1+q)^2) (k_1+q)· k_1(O_⊥(γ,k_1^2) k_1^ν)×(δ'(2 p_1 · k_1)δ(-2 p_2 · k_1)+δ(2 p_1 · k_1)δ'(-2 p_2 · k_1)). By symmetry the integral in the second line vanishes. We thus have A_1^O^μ(p_1,p_2,q) = 1/p_∞^2 A_1^O_∥μ(γ,q^2) + A_1^O_⊥μ(γ,q^2) with A_1^O_∥μ(γ,q^2) ≡∫d^D k_1/(2π)^D-2 N(γ,(k_1+q)^2) (k_1^μ+q^μ)O_∥((p_1+p_2)^2,k_1^2)δ(2 p_1 · k_1)δ(-2 p_2 · k_1), and A_1^O_⊥μ(γ,q^2)≡-2∫d^D k_1/(2π)^D-2 N(γ,(k_1+q)^2) (k_1+q)· k_1(O_⊥(γ,k_1^2) k_1^μ)×(δ'(2 p_1 · k_1)δ(-2 p_2 · k_1)+δ(2 p_1 · k_1)δ'(-2 p_2 · k_1)), which by tensorial reduction leads to A_1^O_⊥μ(γ,q^2)=- L^μ∫d^D k_1/(2π)^D-2 N(γ,(k_1+q)^2) (k_1+q)· k_1 O_⊥(γ,k_1^2) δ(2 p_1 · k_1)δ(-2 p_2 · k_1). We note an interesting swap between longitudinal and transverse parts in this first iteration. Clearly, when we iterate further, this will generate alternating contributions between the longitudinal and transverse parts. To complete the evaluation of the observable according to the KMOC prescription we now perform the Fourier transform to b-space according to eq. (<ref>). Having already taken the classical limit, it is clear that we can also ignore the q^2-terms in the two delta-functions and effectively the Fourier transform simply becomes Õ(γ,b)=∫d^D q/(2π)^D-2δ(-2 p_1 · q)δ(2 p_2 · q) O((p_1+p_2)^2,q^2)e^i b · q. 
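It is useful to record one elementary step here (a short aside on our part): the two delta-functions freeze the components of q in the plane spanned by p_1 and p_2, on which b has no component, and the associated Jacobian is 4 m_1 m_2√(γ^2-1) since p_1· p_2=m_1 m_2 γ. Hence ∫d^D q δ(-2 p_1 · q)δ(2 p_2 · q) f(q^2) e^i b · q = 1/(4 m_1 m_2√(γ^2-1))∫d^D-2q f(q^2) e^i b · q , which for D=4 is the origin of the prefactor and of the two-dimensional measure in the definition of Ñ(γ,J) given below. 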
For the longitudinal part we have to evaluate the Fourier transform of A_1^O_∥μ(γ,q^2) which reads ∫d^D q/(2π)^D-2d^D k_1/(2π)^D-2 N(γ,(k_1+q)^2) (q^μ+k_1^μ) O_∥((p_1+p_2)^2,k_1^2) δ(2 p_1 · k_1)δ(-2 p_2 · k_1)×δ(-2 p_1 · q)δ(2 p_2 · q)e^i b · q , and by a change of variables q → q-k_1 and k_1 → -k_1 the integral factorizes (<ref>)=∫d^D q/(2π)^D-2 q^ν N(γ,q^2) δ(-2 p_1 · q)δ(2 p_2 · q)e^i b · q×∫d^D k_1/(2π)^D-2 O_∥((p_1+p_2)^2,k_1^2) δ(-2 p_1 · k_1)δ(2 p_2 · k_1) e^i b · k_1. Setting Õ_∥(γ,b)≡∫d^D k_1/(2π)^D-2 O_∥((p_1+p_2)^2,k_1^2) δ(-2 p_1 · k_1)δ(2 p_2 · k_1) e^i b · k_1 and noticing that -i∂Ñ(γ,b)/∂ b_ν=∫d^D q/(2π)^D-2 q^ν N(γ,q^2) δ(-2 p_1 · q)δ(2 p_2 · q)e^i b · q, with Ñ(γ,J)  ≡ FT[N(γ,q^2)] ≡ 1/(4m_1m_2√(γ^2 - 1))∫d^2q/(2π)^2 N(γ,q^2) e^ib·q. Therefore the Fourier transform of A_1^O_∥μ(γ,q^2) is given by - i ∂Ñ(γ,b)/∂ b_νÕ_∥(γ,b) = i b^ν/|b|∂Ñ(γ,b)/∂ |b|Õ_∥(γ,b) . For the transverse part we have to evaluate Ã_1^O_⊥μ(γ,b)=- L^μ∫d^D q/(2π)^D-2d^D k_1/(2π)^D-2 N(γ,(k_1+q)^2) (k_1+q)· k_1 O_⊥(γ,k_1^2) ×δ(2 p_1 · k_1)δ(-2 p_2 · k_1)δ(-2 p_1 · q)δ(2 p_2 · q)e^i b · q By the same change of variables as before we get Ã_1^O_⊥μ(γ,b)=L^μ∫d^D q/(2π)^D-2d^D k_1/(2π)^D-2 N(γ,q^2) q· k_1 O_⊥(γ,k_1^2) ×δ(-2 p_1 · k_1)δ(2 p_2 · k_1)δ(-2 p_1 · q)δ(2 p_2 · q)e^i b · qe^i b · k_1 This integral is a product of a Fourier transform over q and a Fourier transform over k_1, leading to Ã_1^O_⊥μ(γ,b)=-L^μ∂Ñ(γ,b)/∂ b^ν∂Õ_⊥(γ,b)/∂ b_ν=L^μ∂Ñ(γ,b)/∂ |b|∂Õ_⊥(γ,b)/∂ |b|. Collecting these pieces, we get Ã_1^O^μ(γ,b)=( i/p_∞^2b^μ/| b |Õ_∥(γ,b) + L^μ∂Õ_⊥(γ,b)/∂| b |)∂Ñ(γ,b)/∂| b |. In terms of the angular momentum J=p_∞ |b|, with p_∞=m_1m_2√(γ^2-1)/√(m_1^2+m_2^2+2m_1m_2γ) the magnitude of the incoming three-momentum in the centre-of-mass frame, we have Ã_1^O^μ(γ,b)=( i/p_∞b^μ/| b |Õ_∥(γ,b) +p_∞ L^μ∂Õ_⊥(γ,b)/∂| b |)∂Ñ(γ,J)/∂| J |. The factorization of the Fourier transforms separates the N operator from the operator O in b-space. This remarkable fact implies that we can iterate the result above as dictated by the commutator relation in eq. (<ref>). It is convenient to introduce a matrix notation so that Ã_1^O^μ(γ,b)=[ L^μ i b^μ/|b| ][ 0 p_∞∂Ñ/∂| J |; 1/p_∞∂Ñ/∂| J | 0 ][ Õ_∥; ∂Õ_⊥/∂|b| ] and Ã_n+1^O^μ(γ,b)=[ L^μ i b^μ/|b| ][ 0 p_∞∂Ñ/∂| J |; 1/p_∞∂Ñ/∂| J | 0 ]^n [ p_∞∂Õ_⊥/∂|b|∂Ñ/∂ J; Õ_∥/p_∞∂Ñ/∂ J ] for summing the iteration to all orders according to the recursion in eq. (<ref>). Inserting it in the expression (<ref>), we get, in component form, ΔÕ^μ(γ,b)=[ L^μ i b^μ/|b| ]∑_n ≥ 1(-i)^n/n![ 0 p_∞∂Ñ/∂| J |; 1/p_∞∂Ñ/∂| J | 0 ]^n-1[ p_∞∂Õ_⊥/∂|b|∂Ñ/∂ J; Õ_∥/p_∞∂Ñ/∂ J ], which is resummed into ΔÕ^μ(γ,b)= [ L^μ ib^μ/|b| ][ -i sin(∂Ñ/∂ J)/(∂Ñ/∂ J) p_∞ (cos(∂Ñ/∂ J)-1)/(∂Ñ/∂ J); (cos(∂Ñ/∂ J)-1)/(∂Ñ/∂ J p_∞) -i sin(∂Ñ/∂ J)/(∂Ñ/∂ J) ][ p_∞∂Õ_⊥/∂|b|∂Ñ/∂ J; Õ_∥/p_∞∂Ñ/∂ J ]. This relation shows the intimate connection between the exponential representation of the S-matrix and the KMOC formalism. It is an interesting fact that the N̂-operator is here sandwiched between the initial in-state and its conjugate rather than between in and out states as in ref. <cit.>. This is a consequence of the fact that the KMOC formalism evaluates observables as the difference between time-evolved in-states whereas in <cit.> N(γ,b) was viewed as an ordinary scattering matrix element from which to compute the scattering angle through the radial action. 
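As a simple check of this resummation, note that the 2× 2 matrix appearing in the iteration squares to a multiple of the identity, [ 0 p_∞∂Ñ/∂ J; 1/p_∞∂Ñ/∂ J 0 ]^2 = (∂Ñ/∂ J)^2 1 , so that in the sum ∑_n ≥ 1(-i)^n/n!(⋯)^n-1, with a≡∂Ñ/∂ J, the odd-n terms add up to -i sin(a)/a times the identity and the even-n terms to (cos(a)-1)/a^2 times the matrix itself. This is just Euler's formula in disguise and it reproduces entry by entry the matrix of trigonometric functions above. 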
It is also interesting to note how the iterative structure of the exponential representation makes N̂ matrix elements the universal objects to compute in the KMOC formalism, whereas all details of the actual observable O^μ only enter through the initial vector determined by Ã_1^O^ν(γ,b) in (<ref>). §.§.§ Momentum kick: the conservative sector We now finally apply the general considerations above to the case of the momentum kick of, say, the particle with initial momentum p_1 in the scattering. We then have that the initial vector is Ã^P_1^μ(γ,b)= i p_∞b^μ/|b|∂Ñ(γ,J)/∂ J. We apply the equation (<ref>) with Õ_∥(γ,b)=p_∞^2 and Õ_⊥(γ,b)=0. We get ΔP̃_1^ν(γ,b)|_ cons=p_∞b^ν/| b |sin(∂Ñ(γ,J)/∂ J) + p_∞^2 L^ν(cos(-∂Ñ(γ,J)/∂ J)-1). In the conservative case, the scattering angle can be extracted from the coefficient of the transverse piece only. A comparison with the general relation between momentum kick and scattering angle <cit.>[The coefficient of sin(χ) is fixed by a quadratic condition. We choose the sign opposite to that of ref. <cit.>.] ΔP̃_1^ν(γ,b)|_ cons=-p_∞b^ν/| b |sin(χ) + p_∞^2 L^ν(cos(χ)-1), demonstrates that χ =  -∂Ñ(γ,J)/∂ J =  - 1/p_∞∂Ñ(γ,b)/∂ b , thus proving the conjectured relation of ref. <cit.> between the scattering angle and the matrix elements of the N-operator. This also shows that Ñ(γ,J) is the radial action. §.§ Including gravitational radiation We now turn to the impact of gravitational radiation on the expectation value of an operator Ô. We recall that in the KMOC formalism radiation is automatically taken into account in perturbation theory by insertion of a complete set of states (including any number of gravitons) in the pertinent in-in matrix elements. Conventionally this is done by means of the Born expansion of the T̂-matrix; here we adapt it to the exponential representation. In particular, we use the insertion of the identity operator inside the nested commutators and extract contributions order by order in the gravitational coupling G. To clarify: when going from T̂-matrix elements to N̂-matrix elements we also include terms that are radiative, to arbitrarily high order in the coupling G. What is missing in order to compute the full expectation value of an operator Ô are the pieces that arise from inserting complete sets of states (including gravitons) inside the nested commutators of eq. (<ref>). The discussion will closely mimic the way we evaluated matrix elements of the N̂-operator itself. We now consider these additional terms. Since our aim is to derive a recursive relation for the classical limit of an observable, we begin by analyzing the expectation value of Â^Ô_n+1 based on one iteration, ⟨ p_1',p_2'| Â^Ô^μ_n+1|p_1,p_2⟩=1/ħ⟨ p_1',p_2'|[N̂, Â^Ô^μ_n]|p_1,p_2⟩. 
Inserting a complete set of states, this has a graphical representation A_n+1^O^μ(p_1,p_2,q) = [scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [color = black, fill=white, very thick] (-2,0) circle (1cm) node N; [color = black, fill=white, very thick] (2,0) circle (1cm) node A_n^O^μ; - [scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [color = black, fill=white, very thick] (-2,0) circle (1cm) node A_n^O^μ; [color = black, fill=white, very thick] (2,0) circle (1cm) node N; + [scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [black,snake it] (-1.5,0) to (1.5,0); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [red] (0,.3) to (0,-.3); [color = black, fill=white, very thick] (-2,0) circle (1cm) node N; [color = black, fill=white, very thick] (2,0) circle (1cm) node A_n^O^μ; - [scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [black,snake it] (-1.5,0) to (1.5,0); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [red] (0,.3) to (0,-.3); [color = black, fill=white, very thick] (-2,0) circle (1cm) node A_n^O^μ; [color = black, fill=white, very thick] (2,0) circle (1cm) node N; +⋯ where the ellipsis represent pieces with insertion of more that one graviton. We stress that this involves the full N̂-operator and in perturbation theory we obviously need to truncate to the given order in G (but for now we keep it general). An additional iteration reads A_n+1^O^μ(p_1,p_2,q) = [scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [color = black, fill=white, very thick] (-2,0) circle (1cm) node N; [color = black, fill=white, very thick] (2,0) circle (1cm) node A_n^O^μ; - [scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [color = black, fill=white, very thick] (-2,0) circle (1cm) node A_n^O^μ; [color = black, fill=white, very thick] (2,0) circle (1cm) node N; +[scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] [->] (-8,.8) to (4,.8); [black,snake it] (-6,0) to (-2,0); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-4,.3) to (-4,-.3); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N; [color = black, fill=white, very thick] (2,0) circle (1cm) node A_n-1^O^μ; + [scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] [->] (-8,.8) to (4,.8); [black,snake it] (-2,0) to (2,0); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (0,.3) to (0,-.3); [color = black, fill=white, very thick] (-6,0) circle (1cm) node A_n-1^O^μ; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N; [color = black, fill=white, very thick] (2,0) circle (1cm) node N; - 2 [scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] [->] (-8,.8) to (4,.8); [black,snake it] (-6,-1) to (-2,-2); [black,snake it] (-2,-2) to (1.6,-1); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-2,-2.3) to (-2,-1.7); [color = black, fill=white, very 
thick] (-6,0) circle (1cm) node N; [color = black, fill=white, very thick] (-2,0) circle (1cm) node A_n-1^O^μ; [color = black, fill=white, very thick] (2,0) circle (1cm) node N; - [scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] [->] (-8,.8) to (4,.8); [black,snake it] (-6,0) to (-2,0); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-4,.3) to (-4,-.3); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N; [color = black, fill=white, very thick] (-2,0) circle (1cm) node A_n-1^O^μ; [color = black, fill=white, very thick] (2,0) circle (1cm) node N; - [scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] [->] (-8,.8) to (4,.8); [black,snake it] (-2,0) to (2,0); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (0,.3) to (0,-.3); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N; [color = black, fill=white, very thick] (-2,0) circle (1cm) node A_n-1^O^μ; [color = black, fill=white, very thick] (2,0) circle (1cm) node N; + [scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] [->] (-8,.8) to (4,.8); [black,snake it] (-6,-1) to (-2,-2); [black,snake it] (-2,-2) to (1.6,-1); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-2,-2.3) to (-2,-1.7); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N; [color = black, fill=white, very thick] (2,0) circle (1cm) node A_n-1^O^μ; + [scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] [->] (-8,.8) to (4,.8); [black,snake it] (-6,-1) to (-2,-2); [black,snake it] (-2,-2) to (1.6,-1); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-2,-2.3) to (-2,-1.7); [color = black, fill=white, very thick] (-6,0) circle (1cm) node A_n-1^O^μ; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N; [color = black, fill=white, very thick] (2,0) circle (1cm) node N; +... where again the ellipsis indicate diagrams with more than one graviton exchange. By combining and soft-expanding all terms in last four lines we find that their classical parts sum up to zero. This implies that for n≥3 single gravitons cannot be exchanged and we are left with A^O^μ_n(p_1,p_2,q)= [scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [color = black, fill=white, very thick] (-2,0) circle (1cm) node N; [color = black, fill=white, very thick] (2,0) circle (1cm) node A^O^μ_n-1; - [scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [color = black, fill=white, very thick] (-2,0) circle (1cm) node A^O^μ_n-1; [color = black, fill=white, very thick] (2,0) circle (1cm) node N; +... This concludes the analysis of single-graviton insertions from the complete set of states. Actually, what we just shown can be generalized to any number of graviton insertions. However, at three-loop level, and as noticed in ref. 
<cit.> in the context of the eikonal, multiple graviton insertions such as [scale=0.5] [black,very thick] [->] (-4,-.8) to (4,-.8); [black,very thick] [->] (-4,.8) to (4,.8); [black,snake it] (-1.5,0.3) to (1.5,0.3); [black,snake it] (-1.5,-0.3) to (1.5,-0.3); [red] (0,-1) to (0,-.6); [red] (0,1) to (0,.6); [red] (0,.15) to (0,.45); [red] (0,-.15) to (0,-.45); [color = black, fill=white, very thick] (-2,0) circle (1cm) node ; [color = black, fill=white, very thick] (2,0) circle (1cm) node ; do not contribute classically. To fourth Post-Minkowskian order we thus only need to consider successions of single-graviton insertions such as [scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] [->] (-8,.8) to (4,.8); [black,snake it] (-6,0) to (-2,0); [black,snake it] (-2,0) to (2,0); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-4,.3) to (-4,-.3); [red] (0,.3) to (0,-.3); [color = black, fill=white, very thick] (-6,0) circle (1cm) node ; [color = black, fill=white, very thick] (-2,0) circle (1cm) node ; [color = black, fill=white, very thick] (2,0) circle (1cm) node ; When we include these radiative pieces we need to enlarge the basis for vector operators. We choose to introduce u_1^μ≡ p_∞(m_1 γ p_2^μ-m_2 p_1^μ)/(m_1^2 m_2(γ^2-1)), u_2^μ≡ p_∞(m_2 γ p_1^μ-m_1 p_2^μ)/(m_1 m_2^2(γ^2-1)), which satisfy p_i· u_j=p_∞δ_ij. These two four-vectors are related to the ǔ_i's of <cit.> by a rescaling u_i=ǔ_i p_∞/m_i with i=1,2. The vector L^μ of eq. (<ref>) which sufficed to describe the basis in the conservative sector is simply a specific linear combination, L^μ= (u_2^μ-u_1^μ)/ p_∞ . Note that every vector X^μ can be decomposed into X^μ=(p_1· X/p_∞)u_1^μ+(p_2· X/p_∞)u_2^μ+ (b· X/|b|)b^μ/|b|≡ X^u_1u_1^μ+X^u_2u_2^μ+X^b b^μ/|b| Similar manipulations to the ones of section <ref> yield a compact matrix identity in b-space, now taking into account the radiation effects with at most one graviton exchange. For n≥ 3 we have ⟨ p_1',p_2'|Â^Ô^μ_n+1|p_1,p_2⟩= [ u_1^μ u_2^μ b^μ/|b| ] M [ ⟨ p_1',p_2'|Â^Ô_n|p_1,p_2⟩^u_1; ⟨ p_1',p_2'|Â^Ô_n|p_1,p_2⟩^u_2; ⟨ p_1',p_2'|Â^Ô_n|p_1,p_2⟩^b ] where we have defined the matrix M≡[ 0 0 i∂Ñ/∂ J; 0 0 -i∂Ñ/∂ J; - i ℰ_2∂Ñ/∂ J iℰ_1∂Ñ/∂ J 0 ] and we have introduced the fractional parts of the Mandelstam variable s=E^2, ℰ_1 and ℰ_2, ℰ_1≡(m_1^2+m_1m_2γ)/(m_1^2+m_2^2+2m_1m_2γ); ℰ_2≡ 1-ℰ_1= (m_2^2+m_1m_2γ)/(m_1^2+m_2^2+2m_1m_2γ). As we have shown, at fourth Post-Minkowskian order we can write the full result as ΔÕ(γ,b)= ΔÕ_ cons(γ,b )+∑_n=1^∞ΔÕ_ rad^(n)(γ,b) where ΔÕ_ rad^(n) is the contribution coming from the succession of n single-graviton insertions. The conservative part is given by ΔÕ_ cons(γ,b )=[ u_1^μ u_2^μ b^μ/|b| ]∑_n ≥ 1(-i)^n/n! M^n-1[ Õ^u_1_1; Õ^u_2_1; Õ^b_1 ] =i[ u_1^μ u_2^μ b^μ/|b| ][ -(∂Ñ/∂ J ℰ_1+ℰ_2 sin(∂Ñ/∂ J))/(∂Ñ/∂ J) -ℰ_1 (∂Ñ/∂ J-sin(∂Ñ/∂ J))/(∂Ñ/∂ J) (-1+ cos(∂Ñ/∂ J))/(∂Ñ/∂ J); -ℰ_2 (∂Ñ/∂ J-sin(∂Ñ/∂ J))/(∂Ñ/∂ J) -(ℰ_1 sin(∂Ñ/∂ J)+∂Ñ/∂ J ℰ_2)/(∂Ñ/∂ J) (1-cos(∂Ñ/∂ J))/(∂Ñ/∂ J); ℰ_2(1-cos(∂Ñ/∂ J))/(∂Ñ/∂ J) ℰ_1 (cos(∂Ñ/∂ J)-1)/(∂Ñ/∂ J) -sin(∂Ñ/∂ J)/(∂Ñ/∂ J) ][ Õ^u_1_1; Õ^u_2_1; Õ^b_1 ], where we have introduced the operator Ô_1≡ [N̂, Ô]. This is just a different way of writing the conservative result of eq. (<ref>), as can be seen by use of the relations (<ref>) and (<ref>). For the radiative sector we get ΔÕ_ rad^(k)(γ,b)= [ u_1^μ u_2^μ b^μ/|b| ]∑_n ≥ k+1(-i)^n/n! M^n-1-k[ Õ^u_1_k+1; Õ^u_2_k+1; Õ^b_k+1 ] with Ô_k+1 = [N̂,[N̂,…,[N̂,Ô]]]_k+1 times|_k graviton insertions after restricting to k graviton insertions, as explained above. 
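The trigonometric matrix in ΔÕ_ cons(γ,b) above arises in the same way as in the 2× 2 case of the previous subsection: using ℰ_1+ℰ_2=1 one checks that M^3=(∂Ñ/∂ J)^2 M, so that odd powers of M are proportional to M and even powers to M^2, and the series ∑_n ≥ 1(-i)^n/n! M^n-1 collapses to -i(1+(M^2/a^2)(sin(a)/a-1)) + ((cos(a)-1)/a^2) M with a≡∂Ñ/∂ J, which is precisely the matrix of trigonometric functions displayed there. 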
The expressions above give the complete result to fourth Post-Minkowskian order and are readily generalized to higher orders. We emphasize again that the terminology of conservative and radiative pieces is completely artificial. There are also radiative modes in what we for historical reasons call the conservative part. This was already obvious at two-loop level where it was shown in refs. <cit.> that the two-to-two matrix element of the N̂-operator yields the full result, including radiation reaction, to that order. We now understand why this phenomenon does not generalize to higher orders, and we understand how to correct for it. There are still many radiative modes and radiation-reaction parts in just the two-to-two matrix element of the N̂-operator and therefore those matrix elements are far from being just conservative. §.§.§ Full momentum kick at fourth Post-Minkowskian order We now turn to the full explicit evaluation of the momentum kick Δ P_1^μ at fourth Post-Minkowskian order. As a building block we first need to compute Ñ(γ,b). This was already done in ref. <cit.> up to 4PM order (except for one term which we take the opportunity to correct here) so that what we label the conservative piece Δ P_1^μ|_ cons.=[ u_1^μ u_2^μ b^μ/|b| ][ p_∞(1- cos(χ_cons)); p_∞(cos(χ_cons)-1); -p_∞sin(χ_cons) ] is known. Here it is convenient to introduce the following notation χ_cons≡-∂Ñ/∂ J and define the PM-expanded quantities χ_cons≡∑_n=0^∞ G^n+1χ^(n)_cons as well as Ñ≡∑_n=0^∞ G^n+1Ñ^(n) so that at fourth Post-Minkowskian order we have Δ P_1^μ,4PM|_ cons. =p_∞ G^4 [ u_1^μ u_2^μ b^μ/|b| ][ - (χ_cons^(0))^4/24+(χ_cons^(1))^2/2+ χ_cons^(0)χ_cons^(2); (χ_cons^(0))^4/24-(χ_cons^(1))^2/2- χ_cons^(0)χ_cons^(2); (χ_cons^(0))^2χ_cons^(1)/2-χ_cons^(3) ] Starting at third Post-Minkowskian order we need to also evaluate the first radiation contribution to the momentum kick ΔP̃_1, rad^μ (1). We thus need the building block P̃_1,1^μ=⟨ p_1',p_2' | [N̂,[N̂,P̂_1^μ]]|p_1,p_2⟩ evaluated with one-graviton insertions. This reads P̃_1,1^μ=FT[∫d^D q_1 d^D q_2/(2π)^2D-4⟨ p_1',p_2' |N̂| p_1+q_1,p_2-q_2,q_2-q_1⟩ (-q^μ-2q_1^μ) ×⟨ p_1+q_1,p_2-q_2,q_2-q_1 |N̂| p_1,p_2⟩δ(2p_1 · q_1)δ(-2p_2 · q_2)δ((q_2-q_1)^2)], where again, for compactness of notation, we label the Fourier transform into b-space by FT. Its precise definition is given in eq. (<ref>). Note that this integral is orthogonal to p_1, i.e. p_1μ⟨ p_1',p_2'|[N̂,[N̂,P̂_1^μ]]|p_1,p_2⟩ =0 , so that it can be decomposed according to P̃_1,1^μ=[ u_1^μ u_2^μ b^μ/|b| ][ 0; P̃_1,1^u_2; P̃_1,1^b ] Based on the analysis of ref. <cit.> we know that the coefficients have the following perturbative expansion P̃_1,1^u_2 =G^3 P̃_1,1^u_2,(2) +G^4 P̃_1,1^u_2,(3) +𝒪(G^5), P̃_1,1^b =G^4 P̃_1,1^b,(3)+𝒪(G^5), so that ΔP̃_1, rad^ν,(1)=G^3 [ u_1^μ u_2^μ b^μ/|b| ][ 0; -P̃_1,1^u_2,(2)/2; 0 ]+G^4 [ u_1^μ u_2^μ b^μ/|b| ][ 0; -P̃_1,1^u_2,(3)/2; ℰ_1 χ_cons^(1)P̃_1,1^u_2,(2)/6 -P̃_1,1^b,(3)/2 ]+𝒪(G^5) Note in particular that P̃_1,1^b only receives a contribution from order O(G^4), the 4PM order. As mentioned above, the 3PM case is therefore quite special in that all radiative effects are entirely contained in the classical contribution from the N̂-operator <cit.>. The momentum kick due to radiation at 3PM order only shifts the longitudinal momenta. Starting at fourth Post-Minkowskian order we also need to evaluate the second radiative contribution to the momentum kick ΔP̃_1, rad^μ (2) which, as indicated, involves the insertion of two graviton lines. 
This contribution is more tricky and is diagrammatically represented by [scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] [->] (-8,.8) to (4,.8); [black,snake it] (-6,0) to (-2,0); [black,snake it] (-2,0) to (2,0); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-4,.3) to (-4,-.3); [red] (0,.3) to (0,-.3); [color = black, fill=white, very thick] (-6,0) circle (1cm) node ; [color = black, fill=white, very thick] (-2,0) circle (1cm) node ; [color = black, fill=white, very thick] (2,0) circle (1cm) node ; which has two pieces at 4PM order: [scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] (-8,.8) to (-6,.8); [black,very thick] [->] (2,.8) to (4,.8); [black,snake it] [->] (-6,.8) to (2,.8); [black,very thick] (-2,1.5) to (2,1); [black, very thick] (-2,1.5) to (-6,1); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-2,1.8) to (-2,1.2); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N_0^ rad; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N_0; [color = black, fill=white, very thick] (2,0) circle (1cm) node N_0^ rad; , [scale=0.5] [black,very thick] [->] (-8,.8) to (4,.8); [black,very thick] (-8,-.8) to (-6,-.8); [black,very thick] [->] (2,-.8) to (4,-.8); [black,snake it] [->] (-6,-.8) to (2,-.8); [black,very thick] (-2,-1.5) to (2,-1); [black, very thick] (-2,-1.5) to (-6,-1); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-2,-1.8) to (-2,-1.2); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N_0^ rad; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N_0; [color = black, fill=white, very thick] (2,0) circle (1cm) node N_0^ rad; giving rise to the elementary building block P̃_1,2^μ=⟨ p_1',p_2' | [N̂,[N̂,[N̂,P̂_1^μ]]] |p_1,p_2⟩ evaluated with two-graviton insertions. This is P̃_1,2^μ =G^4 FT[q^μ⟨ p_1',p_2' |N̂_0^radN̂_0N̂_0^rad| p_1,p_2⟩] +G^4 FT[∫d^D q_1 d^D q_2 d^D q_3/(2π)^3D-6 (-3 q_1^μ+3 q_3^μ)⟨ p_1',p_2' |N̂_0^rad| p_1+q_3,p_2-q_2,q_2-q_3⟩ ×⟨ p_1+q_3,q_2-q_3 |N̂_0 | p_1+q_1,q_2-q_1⟩δ(2p_1 · q_3)δ(-2p_2 · q_2)δ^(+)((q_2-q_3)^2) ×⟨ p_1+q_1,p_2-q_2, q_2-q_1 |N̂_0^rad| p_1,p_2⟩δ(2p_1 · q_1)δ^(+)((q_2-q_1)^2)]+O(G^5) ≡ 6 G^4 FT[q^μ L_2(γ,q^2)]+G^4 P̃_1,2^μ,(3)+O(G^5) = 6 i G^4 p_∞b^μ/|b|∂L̃_2(γ,J)/∂ J+G^4 P̃_1,2^μ,(3)+O(G^5) so that its contribution to the momentum kick becomes ΔP̃_1, rad^ν,(2)=G^4 [ u_1^μ u_2^μ b^μ |b| ][ 0; i/6P̃_1,2^u_2,(3); -p_∞∂L̃_2(γ,J)/∂ J+ i/6P̃_1,2^b,(3) ]+𝒪(G^5) Combining all pieces, the full fourth-order momentum kick is thus given by ΔP̃_1^ν,4PM= G^4 [ u_1^μ u_2^μ b^μ |b| ][ p_∞(- (χ_cons^(0))^4/24+(χ_cons^(1))^2/2+ χ_cons^(0)χ_cons^(2)); p_∞( (χ_cons^(0))^4/24-(χ_cons^(1))^2/2- χ_cons^(0)χ_cons^(2))-P̃_1,1^u_2,(3)/2+i/6P̃_1,2^u_2,(3); p_∞((χ_cons^(0))^2χ_cons^(1)/2-χ_cons^(3)-∂L̃_2(γ,J)/∂ J)+ℰ_1 χ_cons^(1)P̃_1,1^u_2,(2)/6 -P̃_1,1^b,(3)/2 + i/6P̃_1,2^b,(3) ] We note the partial recycling of lower-order terms here, a feature that generalizes to higher orders as well. § DETAILS ON THE 4PM CALCULATION §.§ The construction of the integrands To perform the full explicit computation of the momentum, we need to compute only three integrands giving Ñ^(3), P̃_1,1^μ,(3) and P̃_1,2^μ,(3). 
The three integrands can be represented as [scale=0.5] [black,very thick] [->] (-2,-.8) to (2,-.8); [black,very thick] [->] (-2,.8) to (2,.8); [color = black, fill=white, very thick] (0,0) circle (1cm) node N_3; -(q^μ+2q_1^μ) [scale=0.5] [black,very thick] [->] (-8,-.8) to (0,-.8); [black,very thick] [->] (-8,.8) to (0,.8); [black,snake it] (-5,0) to (-2,0); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (-4,.3) to (-4,-.3); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N_0^ rad; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N_1^ rad; -(q^μ+2q_1^μ) [scale=0.5] [black,very thick] [->] (-8,-.8) to (0,-.8); [black,very thick] [->] (-8,.8) to (0,.8); [black,snake it] (-5,0) to (-2,0); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (-4,.3) to (-4,-.3); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N_1^ rad; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N_0^ rad; 3(q_3^μ-q_1^μ) [scale=0.5] [black,very thick] [->] (-8,-.8) to (4,-.8); [black,very thick] (-8,.8) to (-6,.8); [black,very thick] [->] (2,.8) to (4,.8); [black,snake it] [->] (-6,.8) to (2,.8); [black,very thick] (-2,1.5) to (2,1); [black, very thick] (-2,1.5) to (-6,1); [red] (-4,-1) to (-4,-.6); [red] (-4,1) to (-4,.6); [red] (0,1) to (0,.6); [red] (0,-1) to (0,-.6); [red] (-2,1.8) to (-2,1.2); [color = black, fill=white, very thick] (-6,0) circle (1cm) node N_0^ rad; [color = black, fill=white, very thick] (-2,0) circle (1cm) node N_0; [color = black, fill=white, very thick] (2,0) circle (1cm) node N_0^ rad; We compute these from generalized unitarity and velocity cuts, selecting topologies that both have three velocity cuts and respect the conditions on the on-shell gravitons when imposed by the topology. §.§ The integration basis At fourth Post-Minkowskian order the computation of the momentum kick is expanded on two sets of master integrals. A first family of master integrals has delta-function constraints on the massive legs and one graviton propagator as depicted in fig. <ref>(a) 𝒥({n_j},{±,±,±};γ,ϵ)= ∫δ(2v_1·ℓ_1) δ(2v_1 · (ℓ_1+ℓ_2+ℓ_3)) δ(2v_2 · (ℓ_1+ℓ_2))δ(ℓ_2^2)/∏_i=1^12D_i^n_i∏_r=1^3 d^4-2ϵℓ_i (2π)^3-2ϵ. where we have defined the propagators D_1 =ℓ_1^2, D_2=ℓ_2^2, D_3=ℓ_3^2, D_4 =(ℓ_1+ℓ_2)^2, D_5=(ℓ_2+ℓ_3)^2 , D_6=(ℓ_1+ℓ_2+ℓ_3)^2, D_7 =(ℓ_1+q̂)^2, D_8=(ℓ_1+ℓ_2+q̂)^2, D_8=(ℓ_1+ℓ_2+ℓ_3+q̂)^2 , D_9^± =± 2 v_1 · (ℓ_1+ℓ_2)+iε, D_10^±=± 2v_2 ·ℓ_1+iε , D_11^±=± 2v_2 · (ℓ_1+ℓ_2+ℓ_3)+iε, with q̂^2=-1, and v_i=p_i/m_i such that v_i^2=1 and v_i· q=0 for i=1,2, γ=v_1· v_2. Tensorial reductions are conveniently performed using LiteRed <cit.> which has by default the Feynman +iε prescription. We find that for the set of master integrals in (<ref>) the basis needed for longitudinal pieces has dimension 54, and the one for the transverse pieces has the same dimension. These master integrals have a delta-function for one of the graviton propagators as required in the one-graviton radiative sector analyzed in section <ref>. This delta-function breaks the symmetry between l_2 and l_3 compared to the other basis. In the conservative sector of section <ref> it is enough to use the smaller set of master integrals represented in figure <ref>(b) given by ℐ({n_j},{±,±,±};γ,ϵ)= ∫δ(2v_1·ℓ_1) δ(2v_1 · (ℓ_1+ℓ_2+ℓ_3)) δ(2v_2 · (ℓ_1+ℓ_2))/∏_i=1^12D_i^n_i∏_r=1^3 d^4-2ϵℓ_i (2π)^3-2ϵ. The tensorial reduction gives a basis of dimension 40. 
This basis are also sufficient to compute the second radiative term P̃_1,2^μ,(3), which differs only by the boundary conditions we impose in the static γ=1 limit. The world-line computation of <cit.> uses master integrals with delta-function velocity cuts on three massive propagators at fourth Post-Minkowskian order. But they have either Feynman or retarded or advanced propagators and in the end use a total of 576 master integrals. Converting the retarded (respectively advanced) propagator to a Feynman propagator using i (ℓ_0± iϵ)^2-ℓ⃗^2= iℓ_0^2-ℓ⃗^2+iϵ∓πδ(ℓ_0) θ(∓ℓ_0) allows to expand the master integrals used in <cit.> on the basis of master integrals in (<ref>). As in ref. <cit.> we compute the integrals by solving three differential systems of sizes 40× 40, 54×54 and 54 × 54 respectively. There are three regions of integration, potential-potential (PP), potential-radiation (PR) and radiation-radiation (RR). We expand all master integrals in each of these regions, which gives boundary datas to solve and check the solution of the differential systems. In the end, each master integral can be expanded on 9 independent static master integrals (6 for the transverse pieces, 3 for the longitudinal contributions) as ℐ^⊥(γ)=∑_j=1^3 c_PP,⊥^j(γ) I_PP,⊥^j +(4(γ^2-1))^-ϵ∑_j=1^2 c_PR,⊥^j(γ) I_PR,⊥^j+(4(γ^2-1))^-2ϵc_RR,⊥(γ) I_RR,⊥ and ℐ^∥(γ)=(4(γ^2-1))^-ϵ∑_j=1^2 c_PR,∥^j(γ) I_PR,∥^j+(4(γ^2-1))^-2ϵc_RR,∥(γ) I_RR,∥ The final step is then to compute each static master integral with the correct constraint on its graviton propagator (Feynman propagator or delta-function) according to the integrand it going to contribute to. §.§ The final result for the 4PM momentum kick §.§.§ The N-matrix elements For the so-called conservative part (the N-matrix elements), we first recall the results up to 3PM order, Ñ^(0)=-2Gm_1 m_2(2γ^2-1)/√(γ^2-1)Γ(-ϵ)J^2ϵ Ñ^(1)=3π G^2m_1^2 m_2^2(m_1+m_2)(5γ^2-1)/4√(s)1/J Ñ^(2)=G^3 m_1^3 m_2^3 √(γ^2-1)/s(s(64 γ^6 - 120 γ^4 + 60 γ^2-5)/3(γ^2-1)^2-4/3 m_1 m_2 γ (14 γ^2+25) +4 m_1 m_2(3+12 γ^2-4 γ^4) arccosh(γ)/√(γ^2-1) +2m_1 m_2(2γ^2-1)^2/√(γ^2-1)(8-5γ^2/3(γ^2-1)+γ(-3+2γ^2) arccosh(γ)/(γ^2-1)^3/2) )1/J^2 Almost all of this 4PM part of Ñ was already computed in ref. <cit.>, except for one term which we correct here. The velocity cuts automatically eliminate super-classical terms, so that the generalized unitarity integrand arises directly from ⟨ p_1',p_2'| T̂_3 |p_1,p_2⟩+L_0. To this we must add L_1 which precisely cancel the imaginary radiation pieces as at 3PM order. Note also that the real piece from L_1 is canceled by a similar computation as we did in section 3.2. At the end we get Ñ^(3)=Ñ_PP+RR^(3)+Ñ_PR^(3)+L̃_2 with Ñ^(3)_PP+RR=-G^4 (m_1+m_2)^3 m_1^4 m_2^4 π (γ^2-1)/8 s^3/2( ℳ_4^p+ν (4 ℳ^t_4 log(√(γ^2-1)/2)+ ℳ^π^2_4+ ℳ^rem_4))1/J^3 Ñ^(3)_PR=-G^4 (m_1+m_2)^3 m_1^4 m_2^4 π (γ^2-1)/8s^3/2(6πν(2γ^2-1)(5γ^2-1)ℐ(γ)/√(γ^2-1)) 1/J^3 and ℐ(γ) ≡(16-10γ^2/3γ^2-1+2γ(-3+2γ^2) arccosh(γ)/γ^2-1) where for convenience of the reader we have separated the pieces in terms of regions of integration (potential P and radiation R) and used the same notations as in ref. <cit.>. Note that, as already observed in a different context in ref. <cit.>, the L_2 Compton-like term that we have in the conservative piece will exactly cancel the one in the second radiative piece. §.§.§ The first radiation piece At 3PM order the value of the coefficient of the first radiation piece can be extracted from ref. 
<cit.> P̃_1,1^u_2,(2)=-2 m_1^2 m_2^3 p_∞^2/J^3ℰ(γ) with ℰ(γ)/π≡1151 - 3336 γ + 3148 γ^2 - 912 γ^3 + 339 γ^4 - 552 γ^5 + 210 γ^6/48 (γ^2-1)^3/2 + γ (-3 + 2 γ^2) (11 - 30 γ^2 + 35 γ^4)/ 16 (γ^2-1)^2arccosh(γ)- -5 + 76 γ - 150 γ^2 + 60 γ^3 + 35 γ^4/8 √(γ^2-1)log(1 + γ/2) while at 4PM order we have performed the computation and find for the longitudinal part P̃_1,1^u_2,(3) =2 m_1^2 m_2^3 p_∞^3/J^4((m_1 g[1]+m_2 h[1])π^2/192 (γ^2-1)^2+ m_1 g[2]+m_2 h[2]/705600 γ^8 (γ^2-1)^5/2 +(m_1g[3]+m_2 h[3]/6720 γ^9(γ^2-1)^3+(m_1g[4]+m_2h[4]) log(2)/8(γ^2-1)^2)arccosh(γ) +(m_1 g[5]+m_2 h[5]/(γ^2-1)^7/2+m_1g[6]+m_2 h[6] /(γ^2-1)^2)arccosh^2(γ)+m_1g[7]+m_2 h[7]/8(γ^2-1)^2arccosh(γ) log(γ) +m_1g[8]+m_2 h[8]/8(γ^2-1)^2(arccosh(γ) log(1+γ/2)-2 _2(-γ+√(γ^2-1))) +m_1g[9]+m_2 h[9]/32(γ^2-1)^2(_2(γ-1/γ+1)-4_2(√(γ-1/γ+1))) -m_1g[10]+m_2 h[10]/16 (γ^2-1)^2_2(-(γ-√(γ^2-1))^2)) with g[1] =γ (-1485 + 4993 γ^2 - 3195 γ^4 + 1575 γ^6) g[2] =385875 - 1837500 γ^2 + 7188300 γ^4 - 21241500 γ^6 + 767410066 γ^8 + 3966858415 γ^10 - 3429240286 γ^12 - 791542442 γ^14 + 393897472 γ^16 g[3] =3675 - 19950 γ^2 + 79800 γ^4 - 246540 γ^6 + 222810 γ^8 - 25426269 γ^10 - 37185456 γ^12 + 46406238 γ^14 + 2662204 γ^16 - 3592192 γ^18 g[4] =1263 - 3883 γ^2 + 1065 γ^4 - 525 γ^6 g[5] =32 γ^2 (60 + 35 γ^2 - 59 γ^4 + 4 γ^8) g[6] =8 γ (-9 + 26 γ^2) g[7] =γ (1041 - 2773 γ^2 - 1065 γ^4 + 525 γ^6) g[8] =3 (37 γ - 185 γ^3 + 355 γ^5 - 175 γ^7) g[9] =6 (6 - 37 γ - 66 γ^2 + 185 γ^3 + 210 γ^4 - 355 γ^5 - 150 γ^6 + 175 γ^7) g[10] =γ (1041 - 2773 γ^2 - 1065 γ^4 + 525 γ^6) h[1] =2 (2075 + 17367 γ^2 + 5553 γ^4 - 6819 γ^6) h[2] =490 γ (1575 - 8250 γ^2 + 35710 γ^4 - 142640 γ^6 - 5560073 γ^8 - 417302 γ^10 + 4034092 γ^12 - 587336 γ^14 + 6144 γ^16) h[3] =14 γ (525 - 3100 γ^2 + 13690 γ^4 - 55260 γ^6 + 816595 γ^8 + 3752006 γ^10 - 1978290 γ^12 - 1029342 γ^14 + 213480 γ^16 + 24576 γ^18) h[4] =-2 (2057 + 15261 γ^2 + 3387 γ^4 - 4321 γ^6) h[5] =-32 γ (-3 + 2 γ^2) (-8 - 51 γ^2 - 6 γ^4 + 8 γ^6) h[6] =16 (16 + 111 γ^2 + 18 γ^4 - 24 γ^6) h[7] =-2 (2039 + 13155 γ^2 + 1221 γ^4 - 1823 γ^6) h[8] =-2 (9 + 1053 γ^2 + 1083 γ^4 - 1249 γ^6) h[9] =6 (36 - 1209 γ + 4212 γ^2 - 6422 γ^3 + 4332 γ^4 + 1755 γ^5 - 4996 γ^6 + 2100 γ^7) h[10] =-2 (2039 + 13155 γ^2 + 1221 γ^4 - 1823 γ^6) For the transverse part we find P̃_1,1^b,(3)=-2m_1^2 m_2^2 p_∞^4/J^4((-2 γ^2-1/γ^2-1𝒞(γ)+γ(-3+2γ^2)/(γ^2-1)^3/2ℰ(γ))(m_1+m_2)+2γ^2-1/(γ+1)√(γ^2-1)ℰ(γ) m_1) with 𝒞(γ)/π≡-237 + 386 γ + 111 γ^2 - 683 γ^3 + 537 γ^4 + 240 γ^5 - 411 γ^6 + 105 γ^7/24 (γ^2-1)^2 -γ (-3 + 2 γ^2) (-12 + 19 γ + 72 γ^2 - 70 γ^3 - 60 γ^4 + 35 γ^5) /8 (γ^2-1)^5/2arccosh(γ) + -62 + 155 γ + 16 γ^2 - 70 γ^3 - 90 γ^4 + 35 γ^5/4 (γ^2-1)log(1 + γ/2) §.§.§ The second radiation piece The contributions from the second radiation piece matches exactly the result of ref. <cit.> with P̃_1,3^b,(3)=-6 i p_∞^4/J^4 c_4b,2rad^(4) diss and P̃_1,3^u_2,(3)=-6 i m_2 p_∞^3/J^4 c_4b,2rad^(4) diss Finally, when inserting all integrals into the formula of eq. (<ref>) we find complete agreement with ref. <cit.>. This amplitude-based approach, which combines the exponential representation of the gravitational S-matrix with the KMOC formalism, thus yields a result for the momentum kick that is in full agreement with the worldline calculation of ref. <cit.>. § CONCLUSION The exponential representation of the S-matrix <cit.> is a natural starting point for a semi-classical analysis of quantum field theory. 
Matrix elements of the N̂-operator in the exponent of the S-matrix are by construction free of superclassical terms and they therefore provide, at leading order, the classical part, followed by quantum corrections. Using the KMOC-formalism, we have shown how the exponential representation of the S-matrix makes manifest the cancellation of superclassical contributions in the conservative sector. One advantage of working with the N̂-matrix rather than the conventional T̂-matrix is indeed that it bypasses the need to ensure the delicate cancellation between superclassical terms of the T̂-matrix. Instead, by extracting the relevant pieces of the N̂-matrix by means of velocity cuts we automatically retrieve the classical terms. Pictorially speaking, the velocity cuts introduced in <cit.> localize the massive scattering states on classical on-shell trajectories. As shown in section <ref> of the present paper, two-to-two massive matrix elements of the N̂-operator, Fourier-transformed to impact parameter space, give the radial action of the conservative sector. This proves the conjectured relation put forward in ref. <cit.>. Including gravitational radiation, the N̂-operator is still a basic building block of the KMOC-formalism and as an example we have shown how the momentum kick in the scattering of two black holes can be compactly described by matrix elements of N̂. We have provided the explicit formulas up to and including fourth Post-Minkowskian order but the framework is iterative and it is straightforward to derive corresponding expressions to arbitrarily high order in Newton's constant G. As an application we have explicitly derived the momentum kick at fourth Post-Minkowskian order. Our results are in agreement with <cit.>. As is well known, and somewhat disturbingly, it leads to a scattering angle that diverges at high energy if one applies the scattering angle expression of ref. <cit.>. The solution for the integrals used here and in the references above is the one connecting smoothly to the Post-Newtonian expansion. We cannot exclude that another solution exists which is valid at high energy only and without a smooth connection to the Post-Newtonian limit. This possibility seems to deserve attention. Alternatively, one could consider doing a new fourth-order calculation from scratch with massless scalars. The resulting relationship between the KMOC-formalism and the exponential representation of the S-matrix is very simple and of a universal form involving trigonometric functions together with iterated commutators. This trigonometric structure arises from N̂ being the exponential phase operator of the S-matrix and is thus closely linked to the Euler formula. Beyond the conservative parts, the operator identities involved lead to additional terms but the structure of nested commutators is responsible for the simple algebraic relations that iteratively build up observables to higher and higher orders in the gravitational coupling constant. In the end, the expression for classical observables including all dissipative effects becomes remarkably simple by combining the KMOC formalism with the exponential representation of the S-matrix. The full calculation reduces to scattering amplitude evaluations for which modern techniques have become highly developed. There is thus no need to distinguish between different pieces or to separate the amplitude calculation into different types of contributions; one must only retain all classical terms, as this provides the full classical answer. 
§.§ Acknowledgements We thank Thibault Damour for comments. P.V. would like to thank the LAPTh for the hospitality during the completion of this work. The work of P.H.D. was supported in part by DFF grant 0135-00089A, the work of E.R.H. was supported by the Rozenthal Foundation and ERC Starting Grant No. 757978 from the European Research Council, and the research of P.V. has received funding from the ANR grant “SMAGP” ANR-20-CE40-0026-01.
http://arxiv.org/abs/2307.05196v1
20230711120547
A simple model for self-propulsion of microdroplets in surfactant solution
[ "Swarnak Ray", "Arun Roy" ]
physics.flu-dyn
[ "physics.flu-dyn", "cond-mat.soft" ]
[email protected]
Soft Condensed Matter Group, Raman Research Institute, Bangalore 560080, India
We propose a simple active hydrodynamic model for the self-propulsion of a liquid droplet suspended in micellar solutions. The self-propulsion of the droplet occurs by spontaneous breaking of isotropic symmetry and is studied using both analytical and numerical methods. The emergence of self-propulsion arises from the slow dissolution of the inner fluid into the outer micellar solution as filled micelles. We propose that the surface generation of filled micelles is the dominant reason for the self-propulsion of the droplet. The flow instability is due to the Marangoni stress generated by the non-uniform distribution of the surfactant molecules on the droplet interface. In our model, the driving parameter of the instability is the excess surfactant concentration above the critical micellar concentration, which directly correlates with the experimental observations. We consider various low-order modes of flow instability and show that the first mode becomes unstable through a supercritical bifurcation and is the only mode contributing to the swimming of the droplet. The flow fields around the droplet for these modes and their combined effects are also discussed.
A simple model for self-propulsion of microdroplets in surfactant solution
Arun Roy
August 12, 2023
§ INTRODUCTION A common example of an artificial micro-swimmer is a liquid droplet dispersed in another immiscible fluid and propelled by self-generated Marangoni flow. This requires the creation of a non-uniformity of surface tension, which is maintained by a non-uniform distribution of surface-active species on the interface. One way of creating such surface tension gradients is through chemical reactions <cit.>. Another way is micellar solubilization, in which a drop of one fluid slowly dissolves in a micellar solution of another fluid by forming filled micelles. The earliest reports of spontaneously generated convective flow fields by such solubilizing droplets can be found in <cit.>, but those authors did not perform a detailed study of self-propulsion. In recent years, several studies have focused on droplet motion due to micellar solubilization <cit.>. Some of these systems involve water droplets in solutions of a non-ionic surfactant in an organic oil <cit.>, while others involve oil droplets suspended in aqueous solutions of ionic surfactants <cit.>. In all of these systems, it has been found that the drops simultaneously exhibit self-propulsion and dissolution above a sharp threshold total concentration of the surfactant in the outer fluid, which is much greater than the critical micellar concentration (CMC). It has also been found that the inner fluid can be in the isotropic <cit.>, nematic <cit.>, or cholesteric phases <cit.>, while self-propulsion of smectic droplets has not been reported. It was also shown using fluorescence microscopy that the droplets leave behind a trail of filled micelles <cit.>. There are several models aiming to explain the self-propulsion of such droplets by proposing mechanisms for sustaining the non-uniform distribution of surfactants. Since there is no inherent asymmetry in such systems, one relies on the spontaneous symmetry breaking of the isotropic surfactant distribution. Herminghaus et al. <cit.> proposed that the interface region of the droplet acts as a sink (source) for empty (filled) micelles.
This leads to a radial gradient of the density of empty micelles in the steady state. This gradient plays the role of a driving parameter in their model for self-propulsion. The authors showed that in systems where the empty micelles collect solute molecules from the bulk, increasing the empty-micelle gradient can result in an increased surfactant concentration at the interface. This effect, combined with small perturbations of the droplet flow fields, can lead to the desired self-propulsion of the droplet. However, the assumed far-field gradient of the empty micelles in the outer fluid is not expected for such micrometer-sized droplets. In an attempt to formulate a generalized model, Morozov et al. <cit.> treated the droplet interface as a sink for monomers and assumed a fixed-flux condition at the interface. Assuming a characteristic velocity scale associated with such a fixed flux, they formulated an intrinsic Peclet number (Pe), defined as the ratio of the characteristic strengths of advection and diffusion. They showed that beyond a critical value of this Peclet number, the nonlinear coupling between the concentration and velocity fields leads to self-propulsion. However, the mechanism behind this characteristic fluid flow proportional to the influx of monomers is not clear. More recently, Morozov et al. <cit.> treated the droplet interface as a source for swollen micelles and developed a similar model. However, the velocity scale used in defining Pe was introduced somewhat artificially in the model. Izzet et al. <cit.> showed that a radial gradient of surfactant monomers in the vicinity of the droplet can exist because of the possible lowering of the CMC due to the presence of the small number of oil molecules dissolved near the interface. An infinitesimal perturbation in the droplet velocity can then lead to anisotropy in the surfactant concentration on the droplet interface, inducing a Marangoni flow, and at high enough dissolution rates the droplet can propel itself. Most of the existing models take into account only the variation of surfactant concentration outside the droplet <cit.> as the mechanism of self-propulsion. One key aspect of such systems found experimentally is that droplet motion occurs only above a certain critical value of the surfactant concentration in the bulk, far exceeding the CMC. The excess concentration above the CMC produces micelles that are in dynamic equilibrium with the monomers in the fluid. Therefore, the monomer concentration in the bulk is expected to remain close to the CMC value, since any deficiency in monomer concentration is quickly replenished by the dissociation of empty micelles. Only one of the available models takes this factor into account <cit.>. This model considers the transport of adsorbed monomers at the interface and assumes an explicit form of the filled-micelle production rate from the interface. However, it still relies on symmetry breaking of the species concentration in the bulk and does not provide a clear relation between activity and total surfactant concentration. The present study aims to provide a minimal model that directly correlates the onset of self-propulsion with the total bulk concentration of the surfactant, and to show that self-propulsion can be achieved through interfacial processes alone. In section <ref>, we describe the geometry of the model system used in our calculations and the mathematical model developed to account for the self-propulsion of a droplet.
The linear stability analysis of the model equations is discussed in section <ref>. In section <ref>, we describe the numerical methods used to solve the nonlinear transport equations. The results and conclusions are given in sections <ref> and <ref>, respectively. § MODEL The physical system consists of a swimming droplet of radius a slowly dissolving into a surfactant solution. The droplet interface is covered with surfactant molecules that can be transported along the interface due to diffusion and advection. We assume that the solubility of the surfactant molecules in the droplet's inner fluid is negligible and they mostly exist at the interface and the outer fluid. Surfactant molecules can exist in three forms in the outer fluid viz. as monomers with concentration denoted by C_1, as empty micelles, and as filled micelles that have acquired some molecules of the dissolving inner fluid. Both the inner and outer fluids are assumed to be Newtonian and incompressible in our model. The droplet is assumed to be moving in the outer fluid of an infinite extent with no externally imposed flow. The density and viscosity of the inner(outer) fluids are homogeneous and denoted as ρ̃(ρ) and μ̃(μ), respectively. Since the Reynolds number (Re) associated with these systems is often much less than unity, the inertia of the fluids and the drop is negligible. Hence the flow fields satisfy the Stokes equations. The flow fields are subjected to kinematic, dynamic, and stress balance conditions at the droplet interface. These boundary conditions along with the surfactant transport equation are used to solve for the velocity fields and the surfactant distribution on the droplet interface. There can be several different mechanisms for the solubilization of an oil/water droplet into a micellar solution. (a) It is possible that empty micelles directly collide with the droplet interface and collect solute molecules and diffuse into the bulk. (b) The empty micelles may acquire individual solute molecules from a diffused layer around the droplet near its interface. (c) There may be direct emission of solute-filled micelles from the droplet interface. The first possibility is not expected to be operative for systems with ionic surfactant molecules due to the electrostatic repulsion between the micelles and the droplet interface. The diffused layer thickness in the second process is also expected to be small for low solubility of the inner solute molecules in the outer liquid and surfactant concentrations in the outer liquid well above the CMC. For surfactant concentrations well above the CMC, the large number of empty micelles present in the outer liquid take away the solute molecules from the diffused layer and reduce it to a negligible thickness at a steady state. Hence the self-propulsion due to this process is unlikely. We assume that the spontaneous emission of filled micelles, from the interfacial monolayer of adsorbed surfactant molecules, is the dominant process contributing to the solubilization in our model. The rate of emission of filled micelles is postulated to be proportional to excess surfactant concentration C_e = (C_tot - C_m), where C_tot is the total surfactant concentration and C_m is the critical micellar concentration, respectively. The excess surfactant concentration C_e increases the propensity of holding the filled micelles in the outer fluid. 
This emission is expected to decrease the average interfacial surfactant concentration compared to that of a non-solubilizing drop in a micellar solution. We note that minute perturbation of surfactant distribution leads to small amplitude flow fields near the droplet interface. We propose that in regions of negative surface divergence of these flow fields, there is compression of the monolayer which in turn facilitates the formation of filled micelles from those regions. On the other hand, in regions of positive surface divergence, the stretching of the monolayer hinders the emission of swollen micelles. It is found that the droplet dissolution rates are usually linear which hints that it could be a surface-dominated process <cit.>. The model equations are made dimensionless by performing the following transformations of the relevant variables. The radial distance is measured in units of the droplet radius a giving the dimensionless form r^* = r/a. The dimensionless time t^*=t/( a^2/D_s), where D_s is the molecular diffusivity of surfactant molecules at the interface. The dimensionless interfacial surfactant concentration, bulk monomer concentration, and bulk excess surfactant concentration are defined as Γ^*=Γ/ Γ_m, C^*_1= C_1/C_m, C^*_e = C_e/C_m respectively, where Γ_m is the maximum possible interface concentration. The dimensionless surface tension σ^*=σ/ σ_0, where σ_0 is the surface tension of a clean interface. The fluid velocity and pressure are made dimensionless as u^*= u/(D_s/a) and p^*= p/(μ D_s/a^2). The other parameters of the model appearing in eqn. (<ref>) below are made dimensionless using k_a^* = k_a a^2 C_m/Γ_m, k_d^* = k_d a^2/D_s, e_1^* = e_1 C_m a^2/(D_sΓ_m) , e_2^* = e_2 C_m/Γ_m. For convenience, we henceforth denote the dimensionless parameters and variables without the superscript star. In the dimensionless form the momentum transport and continuity equations for the incompressible outer/inner fluids for low Reynolds number can be written as, ∇^2 u = ∇ p ; ∇· u = 0 ν∇^2ũ = ∇p̃ ; ∇·ũ = 0 where, {u,p}({ũ,p̃}) represent the dimensionless velocity and pressure fields of outer(inner) fluids respectively. All the bulk material properties of the fluids are taken to be constant and the gravitational effects are negligible. The self-propulsion of the droplet along a certain direction occurs with the spontaneous breaking of the isotropic symmetry. We assume the flow field around the droplet is axisymmetric and solve the equations in the droplet rest frame using a spherical polar coordinate system as shown in fig. <ref>. Without loss of generality, the droplet is assumed to be moving along the negative z-axis in the lab frame, so that in the droplet rest frame the far-field velocity takes the form, u→ U ẑ where U is the magnitude of droplet velocity in the lab frame and ẑ is the unit vector along the polar axis. The dimensionless boundary conditions at the droplet surface, r=1 can be written as, u_r = ũ_r =0 ,u_θ = ũ_θ which represent the vanishing of the normal component of velocity due to the impenetrability of the interface and the continuity of the tangential component of velocity, respectively. We also neglect any small shape changes in the droplet. The dimensionless form of the advection-diffusion equation for the interfacial surfactant concentration Γ can be written as, ∂Γ/∂ t+∇_s·(u_sΓ)=∇_s^2Γ +k_a C_1(1-Γ)-k_dΓ -(e_1-e_2 ∇_s·u_s) C_e where the terms on the left-hand side of eqn. 
(<ref>) represent the explicit time derivative and advection of surfactants, respectively. The operator ∇_s represents the surface gradient on the droplet interface. The first term on the right-hand side represents the molecular diffusion of adsorbed surfactants on the interface. The second (third) term on the right-hand side represents the adsorption (desorption) of monomers at the interface from(to) the outer fluid with the dimensionless rate coefficient k_a(k_d). The last term in eqn. (<ref>) takes into account the spontaneous emission of filled micelles from the interface. The first part of this term with coefficient e_1 represents an isotropic emission independent of flow and the other part with coefficient e_2 represents the emission contribution depending on the flow as discussed earlier. We assume that the bulk monomer concentration C_1 remains homogeneous and constant at C_m. For simplicity, we consider a linear relationship between surface tension and interfacial surfactant concentration as σ(Γ)= 1 - R T Γ_m/σ_0Γ, where R is the ideal gas constant and T is the absolute temperature. The tangential stress component due to the Marangoni effect is discontinuous across the interface whenever ∇_sσ is non-zero and this boundary condition in the spherical polar coordinate frame can be written as, ν(-ũ_θ/r+∂ũ_θ/∂ r) -(-u_θ/r+∂ u_θ/∂ r) = -M ∂Γ/∂θ where M=RT Γ_m a/D_s μ is the Marangoni number and ν = μ̃/μ is the ratio of the dynamic viscosities of the inner and outer fluids, respectively. The hydrodynamic equations given by eqn. (<ref>) and eqn. (<ref>) under axisymmetric conditions can be solved using a stream function formulation with the superposition of different orthogonal modes <cit.>. Noting that there is no external body force acting on the droplet, the solutions for the radial and tangential components of the fluid velocity field in the droplet rest frame can be written as <cit.>, u_r = U(1-1/r^3)η + ∑_n=2^∞α_n n(n+1)(r^-n-2-r^-n)P_n(η) u_θ = -U(1+1/2r^3)√(1-η^2) + ∑_n=2^∞α_n n(n+1)((2-n)r^-n+nr^-n-2)G_n+1(η)/√(1-η^2) ũ_r = -3U/2(1-r^2) η - ∑_n=2^∞α_n n(n+1)(r^n+1-r^n-1)P_n(η) ũ_θ = 3U/2(1-2r^2) √(1-η^2)+ ∑_n=2^∞α_n n(n+1)((n+3)r^n+1-(n+1)r^n-1) G_n+1(η)/√(1-η^2) where η = cosθ and the functions P_n(η), G_n(η) are the Legendre polynomial of degree n and the Gegenbauer polynomial of order n and degree -1/2, respectively. Substituting these expressions for the velocity fields in the stress balance condition Eq. (<ref>) and using the orthogonality properties of the Legendre and the Gegenbauer polynomials, the far-field flow speed can be written as, U = M/3ν+2∫_0^π∂Γ/∂θ G_2 dθ On the other hand, the flow amplitudes α_n of the higher order modes for n≥ 2 can be found as, α_n = -M/4(ν+1)∫_0^π∂Γ/∂θ G_n+1 dθ § LINEAR STABILITY ANALYSIS The linear stability analysis was performed on the nonlinear equations of the model to determine the threshold value of the control parameter above which the reference state becomes unstable. In the reference state, the droplet has a uniform distribution of surfactants on the interface with zero flow in the inner and outer fluids. The linear stability analysis was carried out by introducing small perturbation of order ϵ to the dependent variables from their reference values as, u = 0 + ϵ u ũ = 0 +ϵũ Γ = Γ_0 + ϵΓ_1 The different orders of approximation can be obtained by substituting the variables from eqn. (<ref>) into the equations (<ref>) - (<ref>) and collecting the terms of the same powers of ϵ. 
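As a quick numerical cross-check of the integral expressions for U and α_n given at the end of the previous section, the short sketch below (an editorial illustration, not part of the original text; it assumes NumPy, and the modal amplitudes b1, b2 are arbitrary test values) evaluates the integrals by quadrature for a prescribed axisymmetric surface concentration and compares them with the closed forms obtained in the linear analysis that follows.

```python
import numpy as np

# Dimensionless parameter values quoted later in the Results section
M, nu = 24775.0, 60.0

# Test surface concentration: uniform part plus small mode-1 and mode-2 perturbations
b1, b2 = -1.0e-3, 5.0e-4                     # arbitrary (illustrative) modal amplitudes
theta = np.linspace(0.0, np.pi, 20001)
eta = np.cos(theta)
P1, P2, P3 = eta, 0.5 * (3 * eta**2 - 1), 0.5 * (5 * eta**3 - 3 * eta)
Gamma = 0.5 + b1 * P1 + b2 * P2              # the uniform part drops out of the integrals

# Gegenbauer functions of degree -1/2: G_n(eta) = (P_{n-2}(eta) - P_n(eta)) / (2n - 1)
G2 = (1.0 - P2) / 3.0
G3 = (P1 - P3) / 5.0

dGamma = np.gradient(Gamma, theta)           # dGamma/dtheta by central differences
trap = lambda f: np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta))   # trapezoidal rule

U  = M / (3 * nu + 2) * trap(dGamma * G2)        # U   =  M/(3 nu + 2)   * int (dGamma/dtheta) G_2     dtheta
a2 = -M / (4 * (nu + 1)) * trap(dGamma * G3)     # a_2 = -M/(4 (nu + 1)) * int (dGamma/dtheta) G_3     dtheta

# Closed forms from the linear analysis: U = -2 M b1 / (3 (3 nu + 2)),  a_2 = M b2 / (10 (nu + 1))
print(U, -2 * M * b1 / (3 * (3 * nu + 2)))       # both ~ 0.0907
print(a2, M * b2 / (10 * (nu + 1)))              # both ~ 0.0203
```

The quadrature reproduces the analytical prefactors to the accuracy of the grid, which is a useful sanity check when these expressions are later coupled to the nonlinear surface-transport solver.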
The zeroth-order approximation gives the uniform surfactant concentration on the droplet interface in the reference state as, Γ_0 = (k_a - e_1 C_e)/k, where k = k_a + k_d. In the first-order approximation, Γ_1 satisfies the transport equation, ∂Γ_1/∂t = ∇_s^2 Γ_1 - k Γ_1 + ∇_s·u_s (e_2 C_e - Γ_0), and the flow velocities in the fluids satisfy the Stokes equations. Then the solutions to the fluid velocity components can be written in the form of eqns. (<ref>) - (<ref>). Assuming Γ_1 can be expanded as Γ_1 = ∑_n=1^∞ b_n(t) P_n(cosθ), we solve the resulting time-evolution equations for the mode amplitudes b_n(t). Then, using eqn. (<ref>) and eqn. (<ref>), the amplitudes of the flow velocities can be written as, U = -2Mb_1/[3(3ν+2)] and α_n = Mb_n/[2(ν+1)(2n+1)] for n ≥ 2. For the first mode (n=1), the amplitude b_1(t) satisfies, d b_1/d t = [ (2M/(3ν+2)) (e_2 C_e - Γ_0) - (2+k) ] b_1, which gives b_1 ∝ e^(λ_1 t) with the growth exponent λ_1 = (2M/(3ν+2)) (e_2 C_e - Γ_0) - (2+k). For λ_1 > 0, the reference motionless state becomes unstable to the swimming mode (n=1), giving the threshold excess concentration C_e1 = [k(2+k)(3ν+2) + 2Mk_a] / [2M(k e_2 + e_1)]. Similarly, for higher-order modes with n ≥ 2, the time-evolution equation of the mode amplitude b_n(t) can be written as, d b_n/d t = [ (Mn(n+1)/((ν+1)(2n+1))) (e_2 C_e - Γ_0) - {n(n+1)+k} ] b_n, and the threshold excess concentration above which the n-th mode becomes unstable is given by, C_en = [k{n(n+1)+k}(ν+1)(2n+1) + k_a M n(n+1)] / [n(n+1) M (k e_2 + e_1)]. It should be noted that only mode 1 gives rise to net propulsion of the droplet. The higher-order modes, though they produce flow around the droplet, do not give rise to net propulsion, as discussed below. § NONLINEAR NUMERICAL ANALYSIS The linear stability analysis gives the threshold values of the excess concentration C_e above which the reference motionless state becomes unstable to the different instability modes. Above the threshold, these modes initially grow exponentially with time but saturate at long times due to the non-linear effects. Hence the full non-linear model equations need to be solved to study the long-time behaviour of these modes. The non-linear surfactant transport equation was solved numerically using a forward-time central-space (FTCS) finite difference scheme to find the saturation values of the mode amplitudes above the threshold. In this numerical method, the surface velocity field given by eqn. (<ref>) for the modes under consideration was substituted in eqn. (<ref>). The resultant equation was discretized for the spatial derivatives using a second-order central difference scheme, and the time integration was performed using the forward Euler method. Because of the assumed axisymmetry of the problem, eqn. (<ref>) can be solved in the half-space 0 ≤ θ ≤ π. The condition of axisymmetry also requires that the diffusive flux of the surfactants on the drop interface be zero at the poles θ = 0 and θ = π. In our numerical scheme, this was accomplished by using ghost points outside the range of θ. L'Hospital's rule was used to remove the singularity in the diffusive term (∇_s^2Γ) at the poles. For the first mode, the solution was advanced in time assuming an initial form of the interfacial surfactant concentration Γ = Γ_0 - ϵ_1 P_1(cosθ), where ϵ_1 is a small-amplitude perturbation to the uniform concentration. The solution was evolved until the surface concentration reached a steady distribution and the velocity amplitudes reached a saturation value.
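For reference, the closed-form thresholds derived in the linear stability analysis can be evaluated directly. The sketch below is an editorial illustration (not from the original text); it uses the dimensionless parameter values quoted in the Results section (M = 24775, ν = 60, k_a = 0.4, k_d = 0, e_1 = 0.0048, e_2 = 0.008) and reproduces the onset values discussed there.

```python
# Onset thresholds and growth rates from the linear stability analysis above.
M, nu = 24775.0, 60.0
k_a, k_d = 0.4, 0.0
e1, e2 = 0.0048, 0.008
k = k_a + k_d

def growth_rate(n, C_e):
    """Growth exponent of mode n about the motionless reference state."""
    Gamma0 = (k_a - e1 * C_e) / k                      # uniform reference concentration
    drive = e2 * C_e - Gamma0
    if n == 1:
        return 2 * M / (3 * nu + 2) * drive - (2 + k)
    return M * n * (n + 1) / ((nu + 1) * (2 * n + 1)) * drive - (n * (n + 1) + k)

def threshold(n):
    """Excess concentration C_en at which mode n first becomes unstable."""
    if n == 1:
        return (k * (2 + k) * (3 * nu + 2) + 2 * M * k_a) / (2 * M * (k * e2 + e1))
    return (k * (n * (n + 1) + k) * (nu + 1) * (2 * n + 1)
            + k_a * M * n * (n + 1)) / (n * (n + 1) * M * (k * e2 + e1))

print(threshold(1))   # ~50.44: swimming-mode onset; C_e = 50.493 used in the Results is just above it
print(threshold(2))   # ~50.66, matching the mode-2 onset value quoted in the Results section
print(growth_rate(1, 50.493) > 0, growth_rate(2, 50.493) > 0)   # True, False
```

Mode 1 is thus the first mode to become unstable as C_e is raised, consistent with the ordering of the thresholds reported in the Results section.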
The same method is used for mode 2 and for the combined mode with suitable forms for the initial perturbations. To test the accuracy of the above method, we solved eqn. (<ref>) using one other scheme: Forward Euler in time, explicit treatment of advection term, and implicit treatment of diffusion term using Crank-Nicholson method with a second-order central difference for spatial derivatives. Both methods gave very similar results with negligible differences in the solutions. All the computations for different values of the parameters were carried out using the relatively faster FTCS method. § RESULTS The model equations were solved numerically considering the low-order modes of velocity fields which are the dominant modes controlling the hydrodynamic signature of these micro-swimmers. The following values of the dimensionless model parameters are used in the calculations: M = 24775, ν = 60.0, k_a = 0.4, k_d = 0.0, e_1 = 0.0048, e_2 = 0.008. Below we discuss the results obtained for different instability modes. §.§ Mode 1 For the first mode, the non-uniform surfactant distribution is given by the Legendre polynomial of degree one and has vectorial symmetry. The velocity fields corresponding to this mode are given by the first term in eqns. (<ref>) - (<ref>). This mode gives rise to net self-propulsion of the droplet consistent with its vectorial symmetry. The eqn. (<ref>) shows the expression for the threshold value C_e1 of the driving parameter from the linear stability analysis which agrees with the non-linear analysis for the above values of the model parameters. Fig. <ref>a shows the time evolution of the surfactant concentration profile and the velocity field u_θ on the droplet surface for C_e=50.493 which is slightly above the threshold value. Both Γ and u_θ become increasingly non-uniform and tend to a steady state profile at long times. The velocity profile u_θ peaks at θ = π/2 whereas Γ decreases at θ = 0 and increases at θ = π. Accordingly, the droplet propulsion speed U increases with time from zero to a steady state value (see inset of fig. <ref>b). The steady-state droplet propulsion speed U^s increases with increasing values of the driving parameter C_e as shown in fig. <ref>b. Very close to the onset of the instability, U^s grows as (C_e- C_e1)^0.55 in our numerical model indicating that the instability corresponding to mode 1 has the signature of a supercritical bifurcation. The steady-state velocity profiles of the inner and outer fluids in the droplet rest frame are shown in Fig. <ref> which are axially symmetric about the propulsion direction. The flow profile has a far-field velocity in the droplet rest frame, implying that the droplet has net propulsion in the laboratory frame. The propulsion can be understood as follows. Small amplitude deviations in the surfactant concentration from its uniform value on the interface give rise to variations in the interfacial tension which generates a Marangoni flow by spontaneous breaking of isotropic symmetry. The Marangoni flow requires that the surfactant concentration at the front is slightly greater than that at the rear end. This induces a negative divergence of the in-plane flow field at the rear end and a positive divergence at the front end. According to our proposition, there is a greater probability of the emission of swollen micelles from the regions of negative divergence. These swollen micelles take away some surfactant molecules from the interface. 
This process tends to enhance the mode amplitude and is the source of activity in the system. On the other hand, the diffusion and advection processes tend to homogenize any non-uniformity in interfacial surfactant concentration. Above a critical value of the control parameter C_e, the droplet can maintain a lesser surfactant concentration at the trailing end and a higher concentration at the leading end and the resulting Marangoni flow. It is important to note that the total surfactant concentration in the outer fluid is the control parameter in our model as found in experimental studies. §.§ Mode 2 and combined Modes 1 & 2 Similarly, mode 2 corresponds to a surfactant distribution given by the Legendre polynomial of degree two. This mode has quadrupolar symmetry and gives rise to a steady extensile flow instead of the self-propulsion of the droplet. The threshold value of the driving parameter C_e obtained from the linear stability analysis is given by eqn. (<ref>) for n=2. For the parameter values used in our model, the threshold value of C_e for mode 2 is found to be 50.656, which is slightly greater than that for mode 1. It is found that mode 1 is always the first mode to get activated in our model. The nonlinear analysis for mode 2 was performed for the driving parameter C_e= 50.666 which is slightly above the threshold value. The steady-state profiles of surfactant distribution and tangential velocity are shown in fig <ref>a. For mode 2, since there is now negative surface divergence at both poles, the surfactant concentration at the poles is lower compared to the equatorial region and the distribution is symmetric about the equator. The tangential velocity profile at the droplet interface has the same magnitude but opposite direction about the equator. Above the threshold value of the control parameter, the flow amplitudes α_2 grows from zero to steady state values as shown in the inset of fig. <ref>b. The variation of steady-state flow amplitude α_2^s with the control parameter C_e is shown in fig. <ref>b. The amplitude α_2^s increases from zero with a jump discontinuity at the onset of the instability indicating a subcritical bifurcation for this mode. The corresponding steady-state velocity profile in the droplet rest frame is shown in fig. <ref>a. The flow fields around the droplet have axial symmetry about the z-axis and mirror symmetry about the equatorial plane. We also consider the excitation of these first two modes simultaneously in the system and studied the resulting steady-state surfactant distribution and flow profiles as shown in fig. <ref>a. In the combined mode, the magnitude of u_θ peaks closer to the rear stagnation point and the flow field is no longer mirror symmetric about θ = π/2. Above a threshold value of the control parameter, both the flow amplitudes grow from zero to steady state values as shown in the inset of fig. <ref>c. We find that the droplet swimming with the combined modes has the hydrodynamic signature of a pusher which has been established by recent experiments <cit.>. For the combined modes the steady-state velocity profile around the droplet for C_e = 50.593 is shown in fig. <ref>b. The combined mode gives rise to non-zero extensional flow contributions in addition to the propulsion even below the linear stability threshold for α_2 as was observed for an isotropic phoretic particle <cit.>. 
We also observe that, when both modes are considered in the flow field, the first mode does not get activated for any value of C_e if the initial perturbation to the surfactant distribution does not contain the P_1(cosθ) term, while the second mode gets activated even with zero initial perturbation corresponding to it. § CONCLUSION We propose a simple model for swimming active droplets suspended in a micellar solution. Our hydrodynamic model predicts the existence of a sharp instability threshold towards self-propulsion of the droplet in terms of the total surfactant concentration in the micellar solution, which agrees well with the experimental observations <cit.>. A linear stability analysis was performed analytically to determine the instability threshold, and the full nonlinear equations were solved numerically to find the steady-state flow field in the fluids. The theoretically calculated instability threshold agrees qualitatively with the experimentally determined values. Unlike previous models, which take into account only the gradient of surfactant concentration outside the droplet as the mechanism of self-propulsion, we show that self-propulsion can be achieved through interfacial processes alone with the spontaneous breaking of spherical symmetry. The direct emission of swollen micelles from the droplet interface is the dominant self-propulsion mechanism in our model. The experimental observation of a trail of filled micelles left behind by the moving droplet supports this mechanism.
[1] T. Banno, R. Kuroha, and T. Toyota, Langmuir 28, 1190 (2012).
[2] T. Ban, T. Yamagami, H. Nakata, and Y. Okano, Langmuir 29, 2554 (2013).
[3] H. Kitahata, N. Yoshinaga, K. H. Nagai, and Y. Sumino, Phys. Rev. E 84, 015101 (2011).
[4] Y. Kasuo, H. Kitahata, Y. Koyano, M. Takinoue, K. Asakura, and T. Banno, Langmuir 35, 13351 (2019).
[5] N. J. Suematsu, Y. Mori, T. Amemiya, and S. Nakata, J. Phys. Chem. Lett. 12, 7526 (2021).
[6] N. J. Suematsu, Y. Mori, T. Amemiya, and S. Nakata, J. Phys. Chem. Lett. 7, 3424 (2016).
[7] N. J. Suematsu, K. Saikusa, T. Nagata, and S. Izumi, Langmuir 35, 11601 (2019).
[8] S. Thutupalli, R. Seemann, and S. Herminghaus, New J. Phys. 13, 073021 (2011).
[9] S. Thutupalli and S. Herminghaus, Eur. Phys. J. E 36, 1 (2013).
[10] M. Schmitt and H. Stark, Europhys. Lett. 101, 44008 (2013).
[11] N. Yoshinaga, K. H. Nagai, Y. Sumino, and H. Kitahata, Phys. Rev. E 86, 016108 (2012).
[12] B.-H. Chen, C. A. Miller, and P. R. Garrett, Colloids Surf. A 128, 129 (1997).
[13] B.-H. Chen, C. A. Miller, and P. R. Garrett, Langmuir 14, 31 (1998).
[14] A. A. Peña and C. A. Miller, Adv. Colloid Interface Sci. 123, 241 (2006).
[15] K. Peddireddy, P. Kumar, S. Thutupalli, S. Herminghaus, and C. Bahr, Langmuir 28, 12426 (2012).
[16] Z. Izri, M. N. Van Der Linden, S. Michelin, and O. Dauchot, Phys. Rev. Lett. 113, 248302 (2014).
[17] S. Herminghaus, C. C. Maass, C. Krüger, S. Thutupalli, L. Goehring, and C. Bahr, Soft Matter 10, 7008 (2014).
[18] C. Jin, C. Krüger, and C. C. Maass, Proc. Natl. Acad. Sci. 114, 5089 (2017).
[19] C. Krüger, G. Klös, C. Bahr, and C. C. Maass, Phys. Rev. Lett. 117, 048003 (2016).
[20] M. Suga, S. Suda, M. Ichikawa, and Y. Kimura, Phys. Rev. E 97, 062703 (2018).
[21] P. G. Moerman, H. W. Moyses, E. B. Van Der Wee, D. G. Grier, A. Van Blaaderen, W. K. Kegel, J. Groenewold, and J. Brujic, Phys. Rev. E 96, 032607 (2017).
[22] A. Izzet, P. G. Moerman, P. Gross, J. Groenewold, A. D. Hollingsworth, J. Bibette, and J. Brujic, Phys. Rev. X 10, 021035 (2020).
[23] S. Suda, T. Suda, T. Ohmura, and M. Ichikawa, Phys. Rev. Lett. 127, 088005 (2021).
[24] P. Dwivedi, B. R. Si, D. Pillai, and R. Mangal, Phys. Fluids 33, 022103 (2021).
[25] B. V. Hokmabad, R. Dey, M. Jalaal, D. Mohanty, M. Almukambetova, K. A. Baldwin, D. Lohse, and C. C. Maass, Phys. Rev. X 11, 011043 (2021).
[26] T. Yamamoto and M. Sano, Soft Matter 13, 3328 (2017).
[27] A. C. Castonguay, R. Kailasham, C. M. Wentworth, C. H. Meredith, A. S. Khair, and L. D. Zarzar, Phys. Rev. E 107, 024608 (2023).
[28] C. Jin, Y. Chen, C. C. Maass, and A. J. T. M. Mathijssen, Phys. Rev. Lett. 127, 088006 (2021).
[29] M. Morozov and S. Michelin, J. Chem. Phys. 150, 044110 (2019).
[30] M. Morozov and S. Michelin, J. Fluid Mech. 860, 711 (2019).
[31] M. Morozov, Soft Matter 16, 5624 (2020).
[32] M. D. Leven and J. Newman, AIChE J. 22, 695 (1976).
[33] L. G. Leal, Advanced Transport Phenomena: Fluid Mechanics and Convective Transport Processes (Cambridge University Press, 2007).
[34] S. Michelin, E. Lauga, and D. Bartolo, Phys. Fluids 25, 061701 (2013).
http://arxiv.org/abs/2307.04421v2
20230710085412
Towards Enabling Cardiac Digital Twins of Myocardial Infarction Using Deep Computational Models for Inverse Inference
[ "Lei Li", "Julia Camps", "Zhinuo (Jenny) Wang", "Abhirup Banerjee", "Marcel Beetz", "Blanca Rodriguez", "Vicente Grau" ]
eess.SP
[ "eess.SP", "cs.CV", "eess.IV" ]
Towards Enabling Cardiac Digital Twins of Myocardial Infarction Using Deep Computational Models for Inverse Inference
Lei Li, Julia Camps, Zhinuo (Jenny) Wang, Abhirup Banerjee, Marcel Beetz, Blanca Rodriguez, and Vicente Grau
Corresponding author: Lei Li (e-mail: [email protected]). This work was supported by the CompBioMed 2 Centre of Excellence in Computational Biomedicine (European Commission Horizon 2020 research and innovation programme, grant agreement No. 823712). L. Li was partially supported by the SJTU 2021 Outstanding Doctoral Graduate Development Scholarship. A. Banerjee is a Royal Society University Research Fellow and is supported by the Royal Society Grant No. URF\R1\221314. The work of A. Banerjee and V. Grau was partially supported by the British Heart Foundation Project under Grant PG/20/21/35082. Lei Li, Abhirup Banerjee, Marcel Beetz, and Vicente Grau are with the Department of Engineering Science, University of Oxford, Oxford, UK. Julia Camps, Zhinuo (Jenny) Wang, and Blanca Rodriguez are with the Department of Computer Science, University of Oxford, Oxford, UK.
Myocardial infarction (MI) demands precise and swift diagnosis. Cardiac digital twins (CDTs) have the potential to offer individualized evaluation of cardiac function in a non-invasive manner, making them a promising approach for personalized diagnosis and treatment planning of MI. The inference of accurate myocardial tissue properties is crucial in creating a reliable CDT platform, and particularly in the context of studying MI. In this work, we investigate the feasibility of inferring myocardial tissue properties from the electrocardiogram (ECG), focusing on the development of a comprehensive CDT platform specifically designed for MI. The platform integrates multi-modal data, such as cardiac MRI and ECG, to enhance the accuracy and reliability of the inferred tissue properties. We perform a sensitivity analysis based on computer simulations, systematically exploring the effects of infarct location, size, degree of transmurality, and electrical activity alteration on the simulated QRS complex of ECG, to establish the limits of the approach. We subsequently propose a deep computational model to infer infarct location and distribution from the simulated QRS. The in silico experimental results show that our model can effectively capture the complex relationships between the QRS signals and the corresponding infarct regions, with promising potential for clinical application in the future.
The code will be released publicly once the manuscript is accepted for publication.
Cardiac digital twins, myocardial infarction, inverse problem, cardiac MRI, QRS, multi-modal integration.
§ INTRODUCTION Myocardial infarction (MI) is a major cause of mortality and disability worldwide <cit.>. Assessment of myocardial viability is essential in the diagnosis and treatment management of patients suffering from MI. In particular, the location and distribution of myocardial scars provide important information for patient selection and treatment planning. Late gadolinium enhancement (LGE) magnetic resonance imaging (MRI) has been widely used to characterize myocardial scars <cit.>. However, the incorporation of LGE into the MRI examination prolongs scan times and has potential side effects <cit.>. Recent studies have tried to delineate scars using non-enhanced cine MRI, with promising preliminary results <cit.>. Alternatively, the electrocardiogram (ECG) can be used to reveal abnormalities related to electrophysiology post-MI <cit.>. For example, ST-segment elevation and T-wave inversion are commonly used indicators of cardiac remodeling associated with different stages of MI <cit.>. In contrast, QRS patterns have received less attention in the literature, though they also provide valuable information about the extent and location of myocardial damage following an MI <cit.>. It is still partly unclear how QRS abnormalities reflect MI characteristics, such as location, size, transmural extent, and cardiac electrical activity alterations. Therefore, a reliable technique to detect and delineate infarct regions combining non-enhanced imaging and QRS data is highly desirable. Cardiac "digital twin" (CDT) technology can create virtual models of the heart combining cardiac images, ECG, and other subject-specific information <cit.>. It allows clinicians to visualize and analyze the structure, function, and electrical activity of the heart in real time, providing valuable insights into the underlying mechanisms of MI <cit.>. As fig:intro:CDT shows, CDT workflows usually involve two stages, namely anatomical and functional twinning, which present various challenges to overcome <cit.>. The anatomical twinning stage involves the segmentation of cardiac images, reconstruction of the 3D geometry of the heart, and the identification and extraction of relevant anatomical structures. It is complicated by the variability of the heart's anatomy across individuals, as well as by imaging artifacts and noise. At the functional twinning stage, the main challenge is to solve the inverse problem of electrocardiography, i.e., inferring electrophysiological properties of the myocardium from the ECG. This is complicated by the limitations of ECG recordings, which are sparse, noisy, and subject to substantial uncertainties. To solve the inverse problem, state-of-the-art approaches can be broadly separated into two kinds: deterministic and probabilistic methods <cit.>. Deterministic approaches in cardiac electrophysiology involve minimizing a cost function that quantifies the discrepancy between the observed data and the model predictions. For a robust inverse solution, spatial and/or temporal regularization <cit.> and physics-informed regularization <cit.> have been widely used. Probabilistic methods rely on Bayesian inference theory and numerical techniques to generate posterior distributions for the model parameters <cit.>.
They can incorporate prior knowledge into the parameter estimation with quantified uncertainty, which can be used to guide decision-making and assess the robustness of the results <cit.>. Nevertheless, conventional probabilistic methods are usually computationally expensive, as repeated numerical simulations are required to generate samples for the posterior distribution. Recently, deep learning based probabilistic methods have emerged as an alternative to conventional methods for modeling complex dynamics of cardiac electrical activity. They can leverage deep neural networks to approximate the posterior distribution of the model parameters or latent variables, providing faster and more accurate approximations. For example, Ghimire et al. <cit.> proposed a deep generative model to reconstruct cardiac transmembrane potential from ECG data. Li et al. <cit.> designed a deep computational model for the inverse inference of ventricular activation properties in a non-invasive and efficient manner. Xie et al. <cit.> employed a physics-constrained deep learning framework to inversely predict the heart-surface electrical signals from body surface potential maps. Sahli et al. <cit.> developed physics-informed neural networks for the reconstruction of activation maps in cardiac electrophysiology. Dhamala et al. <cit.> proposed a generative variational autoencoder for parameter estimation of a personalized cardiac model. In addition to inferring the electrophysiological properties under sinus rhythm, several studies have investigated the propagation of cardiac electrical signals under arrhythmias based on deep neural networks. For example, Meister et al. <cit.> employed graph convolutional neural networks to estimate the depolarization patterns in the myocardium with scars. Bacoyannis et al. <cit.> reconstructed activation patterns of the myocardium with various local wall thicknesses, as thin walls indicate infarct regions. However, with regard to different post-MI scenarios, the inverse inference of electrophysiological heterogeneity in the infarct regions has not been fully investigated. In this work, we develop a deep computational model for the inverse inference of post-MI with different properties, varying the infarct location, size, and transmural extent. We first conduct a sensitivity analysis to investigate the relationship between QRS abnormalities and infarct characteristics in post-MI. This analysis provides insights into how variations in QRS signals are associated with specific infarct properties, informing the subsequent inverse inference process. The framework can efficiently combine the anatomical properties from cine MRI and electrophysiological information from QRS simulated via a biventricular electromechanical model of post-MI. This study provides an integrated and personalised perspective that incorporates the features from multi-modal data to predict tissue properties of post-MI, enabling the construction of a CDT platform. To the best of our knowledge, this is the first deep learning based computational model that addresses the inverse inference of MI with different characteristics. § METHODOLOGY §.§ Anatomical Twinning: Mesh Reconstruction At the anatomical twinning stage, we reconstruct a subject-specific 3D torso-biventricular tetrahedral mesh from multi-view cardiac MRIs <cit.>. Specifically, for the biventricular reconstruction, we first apply a deep learning based ventricle segmentation to long- and short-axis cardiac MRIs and thus obtain sparse 3D contours.
We then perform a misalignment correction based on the intensity and contour information coupled with a statistical shape model, followed by a surface mesh reconstruction and volumetric tetrahedral mesh generation. We utilize a two-step automated framework for the torso reconstruction, and the locations of the ECG electrodes (I, II, V1-V6, LA, RA, LL, RL) are measured from the personalized 3D torso mesh. To ensure a symmetric, consistent, and intuitive biventricular representation across various geometries, we project the biventricular mesh into a consistent biventricular coordinates (Cobiveco) system <cit.>. The Cobiveco system is defined by (tm, ab, rt, tv), which correspond to transmural, apicobasal, rotational, and transventricular coordinates, respectively. The reader is referred to the anatomical twinning stage of fig:intro:CDT for the illustration of Cobiveco (tv is excluded there). We represent infarct areas in the myocardium as an ellipse with radii r_tm, r_ab, and r_rt as follows, (tm_i - tm_0)^2/r_tm^2 + (ab_i - ab_0)^2/r_ab^2 + (rt_i - rt_0)^2/r_rt^2≤ 1, where (tm_0, ab_0, rt_0) is the center coordinate of the scar region. We consider different post-MI scenarios, including seven locations, two transmural extents, two different sizes, and two different cardiac electrical activity alterations. As fig:method:17AHA_MI_location shows, one can define the infarct areas consistently in the 17-segment American Heart Association (AHA) map <cit.>, enabling the study of the effects of MI properties at a population level. Note that in this study, we only consider the scars in the left ventricle (LV), as the majority of clinically significant myocardial scars present there <cit.>. The LV region is defined in Cobiveco as tv = 0 ∨ (tv = 1 ∧ rt > 2/3) to include the whole septum. For the comparison of different infarct sizes and cardiac electrical activity alterations, we only report on lateral MI as an illustrative case. As tb:method:MI scenario shows, we simulate infarct at seven different locations and one smaller size on lateral MI, each with two levels of transmural extent, and one scenario with a slower CV on transmural large lateral MI, resulting in a total of 17 post-MI scenarios for each subject. fig:method:MI_examples provides several examples of our experimental scenarios. §.§ Functional Twinning: Forward Electrophysiological Simulation At the functional twinning stage, we simulate cardiac electrophysiology via an efficient orthotropic Eikonal model <cit.>, which incorporates a human-based Purkinje system into the formulation of the activation times of root nodes (RN). The simulation is performed on the Cobiveco mesh, solving: { √(∇^T t𝒱^2 ∇ t) = 1, t(Γ_0) = pk(Γ_0)-min(pk(Γ_0)), . where 𝒱 are the orthogonal conduction velocities (CVs) of fibre, sheet (transmural), and sheet-normal directions, t is the time at which the activation wavefront reaches each point in the mesh, Γ_0 is the set of RN locations, and pk is a Purkinje-tree delay function from the His-bundle to every point. Therefore, the earliest activation time at the RNs is defined as their delay from the His-bundle through the Purkinje tree normalized by the earliest activation, such that the wavefront originates at t = 0 in one of the endocardial RNs. 
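To make the forward model above more concrete, the following is a minimal sketch, not the authors' implementation, of how the anisotropic activation-time computation can be approximated with a Dijkstra-style shortest-path solver on the mesh edge graph: each edge is traversed at the directional speed √(e^T 𝒱^2 e) implied by the orthotropic velocity tensor, root nodes are initialized with their normalized Purkinje delays, and scar/border-zone slowing can be encoded by reducing the per-node velocities. All function and variable names are illustrative; a fast-marching or fast-iterative eikonal solver would be more accurate.

```python
import heapq
import numpy as np

def activation_times(nodes, edges, f, s, n, v_f, v_s, v_n, root_nodes, root_delays):
    """Dijkstra approximation of the anisotropic eikonal activation map.

    nodes: (N, 3) coordinates; edges: list of (i, j) mesh edges;
    f, s, n: (N, 3) fibre / sheet / sheet-normal unit vectors;
    v_f, v_s, v_n: (N,) conduction velocities (reduced at scar/BZ nodes);
    root_nodes, root_delays: Purkinje root-node indices and their delays.
    """
    N = len(nodes)
    adj = [[] for _ in range(N)]
    for i, j in edges:
        e = nodes[j] - nodes[i]
        length = np.linalg.norm(e)
        u = e / length
        # directional speed sqrt(u^T V^2 u), averaging the two endpoint fibre frames
        sq = 0.5 * sum((v_f[k] * u @ f[k]) ** 2 + (v_s[k] * u @ s[k]) ** 2
                       + (v_n[k] * u @ n[k]) ** 2 for k in (i, j))
        dt = length / np.sqrt(sq)
        adj[i].append((j, dt))
        adj[j].append((i, dt))
    t = np.full(N, np.inf)
    delays = np.asarray(root_delays, float) - np.min(root_delays)  # earliest root at t = 0
    heap = []
    for r, d in zip(root_nodes, delays):
        t[r] = d
        heapq.heappush(heap, (d, r))
    while heap:
        ti, i = heapq.heappop(heap)
        if ti > t[i]:
            continue
        for j, dt in adj[i]:
            if ti + dt < t[j]:
                t[j] = ti + dt
                heapq.heappush(heap, (ti + dt, j))
    return t
```

Restricting propagation to the mesh edge graph overestimates travel times for wavefronts not aligned with any edge, which is why dedicated eikonal solvers are preferred in practice; the sketch is only meant to convey how root-node delays and locally reduced CVs enter the computation.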
The QRS can be calculated from the activation time map (ATM) via a pseudo-ECG equation <cit.> for a 1D cable source with constant conductivity at a given electrode location (x',y',z'), as ϕ_e (x',y',z') = (a^2 σ_i)/(4 σ_e) ∫ -∇ V_m · [ ∇ (1/r) ] dx dy dz, where V_m is the transmembrane potential, ∇ V_m is its spatial gradient, r is the Euclidean distance from a given point (x,y,z) to the electrode location, a is a constant that depends on the fiber radius, and σ_i and σ_e are the intracellular and extracellular conductivities, respectively. The pseudo-ECG method can efficiently generate normalized ECG signals without significant loss of morphological information compared to the bidomain simulation <cit.>. In modeling the effects of scars on the QRS, it is essential to consider the electrophysiological properties of the infarct regions, such as the slower CVs <cit.>, which can lead to changes in the timing and amplitude of the ECG waveform and thus manifest as changes in the QRS. Therefore, we vary the CVs of infarct and healthy myocardial areas during QRS simulation (see Sec. <ref>). As Fig. <ref> shows, the ATM of MI patients presents slower electrical signal propagation compared to that of healthy ones, resulting in a corresponding alteration in the simulated QRS morphology. §.§ Functional Twinning: Inverse Inference of Post-MI Properties fig:method:computation model provides an overview of the proposed deep computational model, consisting of a dual-branch variational autoencoder (VAE) and an inference model. The VAE captures both anatomical and electrophysiological features, while the inference model uses the latent space representation to predict scar and border zone location. fig:method:network depicts the network architecture. For the geometry reconstruction, we reconstruct coarse and dense point clouds (PCs) to simultaneously learn the global shape and local anatomy of the ventricles. Therefore, the PC reconstruction loss function is defined as follows, ℒ^rec_PC = ∑_i=1^K(ℒ_i,coarse^chamfer + αℒ_i,dense^chamfer), where K is the number of classes, α is the weight term between the two PCs, and ℒ^chamfer is the chamfer distance between the input and reconstructed PCs. To improve the fidelity and resemblance of the reconstructed QR̂S to the original QRS, we minimize their mean-squared error (MSE) and dynamic time warping (DTW) distance <cit.>, ℒ^rec_QRS = ℒ_MSE(QRS, QR̂S) + ℒ_DTW(QRS, QR̂S). Finally, the loss function for training the VAE is calculated as, ℒ_VAE = λ_PCℒ^rec_PC + λ_QRSℒ^rec_QRS + λ_KLℒ^KL, where λ_PC, λ_QRS, and λ_KL are balancing parameters, and ℒ^KL is the Kullback-Leibler (KL) divergence loss that penalizes the discrepancy between the prior and posterior distributions of the latent space. For the inference, we predict the infarct location based on the low-dimensional features learned by the VAE. To alleviate the class-imbalance issue present in the MI segmentation, we combine the cross-entropy (CE) loss and Dice score loss, ℒ_seg = ℒ_CE + λ_Diceℒ_Dice, where λ_Dice is a balancing parameter. For a realistic infarct shape, we further introduce a compactness loss, ℒ_compact = 1/N^pre ∑_i=1^N^pre (d_i^pre + d_i^gd)/d_max^gd, where N^pre is the total number of predicted MI points, d_i^pre and d_i^gd are the Euclidean distances from each predicted MI point i to the center of the predicted and ground truth MI, respectively, and d_max^gd is the maximum Euclidean distance from the ground truth MI points to their center.
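As an illustration of the compactness term just defined, a minimal PyTorch sketch is given below. It assumes the predicted and ground-truth MI points are available as point sets and takes their centroids as the "centers" in the formula; in training, the hard selection of predicted points would typically be replaced or weighted by predicted probabilities to keep the term differentiable.

```python
import torch

def compactness_loss(pred_pts, gt_pts):
    """L_compact = mean over predicted MI points of (d_i^pre + d_i^gd) / d_max^gd."""
    c_pre = pred_pts.mean(dim=0)                       # centroid of predicted MI points
    c_gd = gt_pts.mean(dim=0)                          # centroid of ground-truth MI points
    d_pre = torch.norm(pred_pts - c_pre, dim=1)        # d_i^pre
    d_gd = torch.norm(pred_pts - c_gd, dim=1)          # d_i^gd
    d_max_gd = torch.norm(gt_pts - c_gd, dim=1).max()  # d_max^gd
    return ((d_pre + d_gd) / d_max_gd).mean()
```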
We introduce two further constraints, to control infarct size and prevent scar from appearing in the right ventricle (RV), through two additional loss functions: ℒ_size = (N^pre - N^gd)/N^gd, ℒ_spa = N^pre_RV/N^pre, where N^gd is the total number of ground truth infarct points, while N^pre_RV is the number of predicted infarct points located in the RV, excluding the septum boundary. Hence, the final inference loss is defined as, ℒ_inf = ℒ_seg + λ_compactℒ_compact + λ_sizeℒ_size + λ_spaℒ_spa + λ_VAEℒ_VAE, where λ_compact, λ_size and λ_spa are balancing parameters. § EXPERIMENTS AND RESULTS §.§ Materials §.§.§ Dataset and Simulation Setup We collected 49 subjects with paired 12-lead ECGs and multi-view cardiac MRIs from the UK Biobank study <cit.>. The dataset was randomly divided into 34 training subjects, 5 validation subjects, and 10 test subjects, and each subject has 17 post-MI scenarios. The biventricular tetrahedral mesh for each subject was converted into PCs and then resampled into coarse and dense versions with 1,024 and 4,096 nodes, respectively. On these meshes, we imposed simulated infarcts with different locations, sizes, transmural extents, and CV alterations. During the electrophysiology simulations, a fixed set of RN locations and CV values was utilized. Specifically, the RNs were placed at seven homologous locations based on Cobiveco – four in the LV and three in the RV. In the LV, they were situated in the mid-septum, basal-anterior paraseptal, and two mid-posterior locations, while in the RV, they were located in the mid-septum and two free wall regions <cit.>. Two sizes of lateral MI were achieved by halving the r_ab and r_rt values for the small lateral MI compared to the large one. Two transmural extents were set by varying r_tm, which was set as 3 and 0.5 for transmural and subendocardial scars, respectively. For the baseline QRS simulation, the CV values for the different directions were set as follows: 65 cm/s along the fiber direction, 48 cm/s along the sheet direction, 51 cm/s along the sheet-normal direction, and 100 cm/s and 150 cm/s for the sparse and dense endocardial directions, respectively <cit.>. These values were consistent with reported velocities for healthy human myocardium in previous studies <cit.>. In the simulation of QRS for MI, the CVs in the areas of myocardial scarring and border zone (BZ) were set to 10% and 50% (another slower CV configuration: 5% and 25%) of the respective values observed in healthy myocardium. §.§.§ Evaluation For evaluation, we compared the predicted MI distribution of our proposed automatic method with the gold standard set in the simulation phase. To evaluate the segmentation accuracy, we calculated the Dice score, precision, and recall of the MI prediction, calculated on the PCs. Furthermore, we propose a novel evaluation metric called the AHA-loc-score, to assess the accuracy of MI localization using the 17-segment AHA map, AHA-loc-score = β_c-idδ_c-pre, c-gd + β_idIoU_id + β_c-d(1-d_c), where δ_c-pre, c-gd indicates whether the AHA index of the predicted infarct center matches that of the ground truth, IoU_id calculates the intersection over union (IoU) score of the AHA indices that appear in the predicted and ground truth MI regions, and d_c refers to the normalized distance between the predicted and ground truth infarct centers. The weights β_c-id, β_id, and β_c-d have values of 0.5, 0.2, and 0.3, respectively.
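For reference, a hedged sketch of the AHA-loc-score is given below. It assumes per-point AHA segment labels are available for the predicted and ground-truth infarct points, uses centroids as the infarct centers, and normalizes the center distance by the maximum ground-truth infarct radius; the normalization constant is not stated in the text, so that choice is an assumption.

```python
import numpy as np

def aha_loc_score(pred_pts, gt_pts, pred_aha, gt_aha, w=(0.5, 0.2, 0.3)):
    """AHA-loc-score = w0 * [centres share an AHA segment]
                     + w1 * IoU of covered AHA segments
                     + w2 * (1 - normalised centre distance)."""
    c_pre, c_gd = pred_pts.mean(axis=0), gt_pts.mean(axis=0)
    # AHA segment of each infarct centre, taken from the nearest labelled point
    seg_pre = pred_aha[np.argmin(np.linalg.norm(pred_pts - c_pre, axis=1))]
    seg_gd = gt_aha[np.argmin(np.linalg.norm(gt_pts - c_gd, axis=1))]
    match = float(seg_pre == seg_gd)
    # IoU of the sets of AHA segments touched by the predicted and ground-truth infarcts
    s_pre, s_gd = set(pred_aha.tolist()), set(gt_aha.tolist())
    iou = len(s_pre & s_gd) / max(len(s_pre | s_gd), 1)
    # centre distance, normalised by the maximum ground-truth infarct radius (assumption)
    d_c = min(np.linalg.norm(c_pre - c_gd) / np.linalg.norm(gt_pts - c_gd, axis=1).max(), 1.0)
    return w[0] * match + w[1] * iou + w[2] * (1.0 - d_c)
```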
§.§.§ Implementation The framework was implemented in PyTorch, running on a computer with 3.50 GHz Intel(R) Xeon(R) E-2146G CPU and an NVIDIA GeForce RTX 3060. We use the Adam optimizer to update the network parameters (weight decay = 1e-3). The batch size is 4, and the initial learning rate is set to 1e-4 with a stepped decay rate of 0.5 every 6800 iterations. The balancing parameters in Sec. <ref> are set as follows: α=5, λ_KL=0.01, λ_compact=1, λ_size=1, λ_spa=1, and λ_VAE=1. The simulation of one QRS of MI spent about 5 min. The training of the model took about 10 hours (300 epochs in total), while the inference of the networks required about 9 s to process one test case. §.§ Sensitivity Analysis of QRS for Different Post-MI Characteristics We performed a sensitivity analysis in which we studied the effects of different infarct configurations in the QRS complex. The aim was to find out which locations and sizes had a significant effect on QRS, and thus to establish the feasibility of the inverse inference task. To quantify discrepancy between QRS shapes, we employed a global measure, DTW, which compared signals of different lengths with an additional penalty for the difference in QRS duration between the two signals <cit.>. Furthermore, we introduced four QRS abnormalities reported in literature, i.e., QRS duration prolongation <cit.>, pathological Q-waves <cit.>, poor R wave progression (PRWP) <cit.>, and fragmented QRS (fQRS) <cit.>. The reader is referred to fig:exp:abnormalQRS_MI_example for illustration of each local QRS criteria of post-MI. QRS duration prolongation can occur due to the damage to the heart muscle and subsequent changes in electrical conduction of MI. Pathological Q waves are typically deeper, wider, and longer than normal Q waves, and are usually associated with the loss of electrical activity in the area of the heart affected by the MI. Specifically, it can be defined as the presence of Q wave with duration ≥ 0.03 s and/ or amplitude ≥ 25% of R-wave amplitude <cit.>. PRWP refers to the absence of the normal increase in amplitude of the R wave in the precordial leads when advancing from lead V1 to V6 <cit.>. In the literature, different definitions of PRWP exist <cit.>. Here, we utilize specific criteria, such as the R wave amplitude of 2 mm or less in the lead V3/V4 and the presence of reversed R-wave progression. This is determined when the R wave amplitude of V5 is less than that of V6 or the R wave amplitude of V2 is less than that of V1 or any combination of these. fQRS refers to the presence of multiple small spikes or notches within the QRS complex <cit.>. It is typically present in the lead corresponding to the location of the infarct zone. Note that although these QRS abnormalities have been shown to be useful in the diagnosis and prognosis of MI in some studies, there is also conflicting evidence and debate among researchers regarding their clinical significance and usefulness <cit.>. §.§.§ Sensitivity Analysis: Global QRS Measure To assess the impact of QRS on the 17 different MI scenarios, we measured the dissimilarity between each of these and the baseline, as well as the dissimilarity between them. As fig:exp:QRS_dissimilarity shows, the QRS complex showed morphological alterations in most post-MI scenarios when compared to the normal QRS complex. Particularly, inferolateral, extensive anterior, and apical transmural MI presented more evident alterations compared to others. 
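As an aside before continuing with the dissimilarity results, the local QRS criteria described above translate directly into simple rule checks. The sketch below encodes the stated pathological Q-wave criterion and a PRWP check; how the two PRWP sub-criteria are combined is not fully specified in the text, so the disjunction used here is an assumption, and the 120 ms cutoff for QRS prolongation is a common clinical convention rather than a value taken from this study.

```python
def pathological_q_wave(q_duration_s, q_amplitude, r_amplitude):
    """Q wave with duration >= 0.03 s and/or amplitude >= 25% of the R-wave amplitude."""
    return q_duration_s >= 0.03 or abs(q_amplitude) >= 0.25 * abs(r_amplitude)

def poor_r_wave_progression(r_mm):
    """r_mm: dict mapping precordial lead name -> R-wave amplitude in mm."""
    low_r = r_mm["V3"] <= 2.0 or r_mm["V4"] <= 2.0                      # low R in V3/V4
    reversed_prog = r_mm["V5"] < r_mm["V6"] or r_mm["V2"] < r_mm["V1"]  # reversed progression
    return low_r or reversed_prog   # combination rule assumed; the text lists both criteria

def prolonged_qrs(qrs_duration_ms, cutoff_ms=120.0):
    """Common clinical cutoff; the study reports durations but does not fix a threshold."""
    return qrs_duration_ms > cutoff_ms
```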
One can see a significant decrease in QRS morphology alteration in small lateral MI when compared to that of large lateral MI, especially for subendocardial one. The orientation and location of the heart within the torso can affect the direction and amplitude of the electrical signals detected on the body surface, which can lead to variation in the QRS complex morphology among different individuals. Moreover, differences in the anatomy and physiology of the heart itself can also contribute to the variation in QRS morphology. In the case of lateral MI, the variation in the QRS complex may be more pronounced. This is because the electrical activity associated with ventricular depolarization needs to traverse a larger distance through the LV myocardium to reach the lateral wall, which can result in changes to the amplitude, duration, and morphology of the QRS complex. The degree of transmurality presented a noticeable impact on the QRS morphology at all infarct locations, namely transmural scars generally caused more prominent changes in QRS morphology compared to subendocardial scars. Although the QRS dissimilarities between transmural and subendocardial septal scars were relatively small (DTW^max=0.2 and DTW^avg=0.3), differences in QRS morphology can still be observed, as shown in fig:exp:simulated_QRS_examples. Despite the influence of transmurality on QRS morphology, the differences in QRS between various infarct locations seemed to be more pronounced than those caused by the extent of transmurality. This implies that the QRS has greater sensitivity in localizing MI rather than predicting its transmural extent. The primary QRS morphological difference observed with varying degrees of CV reduction was the QRS duration: 99.5 ms vs. 113.8 ms on transmural large lateral MI. However, our initial tests presented unexpected QRS simulation results when we significantly reduced the CVs in the MI regions. This suggests that the personalized CV configuration of infarct areas during simulation requires further investigation in the future. Most infarct locations were represented on the QRS by leads I, V5, and V6, whereas septal MI was represented by leads V1-V4 and V3-V4 for subendocardial and transmural ones, respectively. This result is in agreement with those reported in clinical practice <cit.>. Generally, larger scars tend to result in QRS changes appearing in more leads. The ability of various QRS leads to accurately detect the location of infarction varied. This is because the electrical activity of the heart is not uniform, and different leads may have a better view of certain regions of the heart. Additionally, the location of the infarct and its extent can influence the morphology of the QRS complex in different leads, which can affect their ability to detect the infarct location. §.§.§ Sensitivity Analysis: Local QRS Measure The changes in QRS morphology for the 17 MI scenarios were reflected in multiple ways. Here, we introduced several QRS criteria and compared the contribution of each of these for infarct detection. We found that apical and inferolateral MI tended to present prolongation of the QRS duration: 124.1 ms and 107.7 ms (apical and inferolateral MI) vs. 90.4 ms (normal). PRWP mainly occurred in extensive anterior, septal, and apical MI, similar as reported in the literature <cit.>. Specifically, the R wave amplitude in the septal MI was sometimes flattened, while the R wave of V6 tended to be larger than that of V5 in the apical MI, as fig:exp:simulated_QRS_examples shows. 
The prevalence of fQRS was more common in the inferior lead (lead II) compared with the anterior leads (leads V3 and V4) and the lateral leads (leads V5 and V6), similar to the results reported in Liu et al. <cit.>. The presence of fQRS in lead II and leads V3-V4 indicated inferolateral and extensive anterior MI, respectively. In contrast, pathological Q wave failed to classify MI from healthy subjects in our simulation system. §.§ Inference Accuracy of Post-MI Properties tb:results:MIinference presents the quantitative results of the proposed method, and fig:result:boxplot provides the boxplots of Dice score. The proposed method obtained the best segmentation and localization performance on the transmural extensive anterior MI (Dice= 0.934 ± 0.028, AHA-loc-score = 0.987 ± 0.007). Even for the scenarios where there were not notable QRS morphology changes, such as MI in the septum and limited anterior areas, the model still can localize the corresponding infarct (DTW^max=0.4, AHA-loc-score ≈ 0.7). Nevertheless, the model showed difficulties in detecting lateral (especially for the subendocardial and small size ones, with Dice score of 0.097 ± 0.112) and inferior MI with Dice scores of 0.228 ± 0.252 and 0.173 ± 0.288 for subendocardial and transmural one, respectively. In general, the segmentation of the transmural MI tended to be more accurate than that of the subendocardial MI (Dice: 0.518 ± 0.347 vs. 0.396 ± 0.271). This observation aligned with expectations, since transmural MI often exhibit more pronounced and distinct QRS abnormalities compared to subendocardial MI, as proved in previous sensitivity analysis. As a result, our model can leverage these noticeable differences to identify and segment the affected region accurately. Nevertheless, their ability to precisely determine the location of the infarction within the myocardium did not vary significantly (AHA-loc score: 0.610 ± 0.343 vs. 0.659 ± 0.339). This can be attributed to the fact that the localization of MI is not solely dependent on the depth or extent of the infarct. Furthermore, the accuracy of predicting scars was generally higher than that of predicting border zones (BZs). This could be because the complex nature of BZs, where the myocardial tissue undergoes a transition from healthy to scarred, introduces additional variability and ambiguity in the QRS signals, leading to a lower prediction accuracy for BZs. The performance in terms of Dice coefficient, precision, recall and AHA-loc-score was generally consistent. However, in specific cases like apical, limited anterior, and inferolateral transmural MI, precision may exhibit a slight superiority over the Dice. Apical MI obtained the highest AHA-loc-score, indicating its accurate and reliable localization. This could be attributed to the uniqueness of the apical location, allowing for a more precise and unambiguous localization of MI due to the absence of significant interference from neighboring structures. Figure <ref> provides 3D results of a representative test subject on different scenarios. One can observe that the 3D visualization agrees well with the quantitative analysis result. There were outliers appearing in the inferior area for lateral MI detection and vice versa, which suggests that the model had difficulty distinguishing between the lateral and inferior MI areas based on their QRS. 
Furthermore, even though extensive anterior and inferolateral MI both covered large areas, the detection of inferolateral MI tended to be more difficult compared to that of extensive anterior MI, as further evidenced by the correlation study of MI volume presented in fig:result:volume_regression. §.§ Ablation Study Accurate MI inference goes beyond merely identifying the location of the infarction; it also requires a comprehensive assessment of the extent of infarct tissue. Therefore, we introduced additional constraints, namely localization constraints (ℒ_spa and ℒ_compact) and an extent constraint (ℒ_size). To evaluate their effectiveness, we conducted an ablation study by selectively removing them from the proposed framework, as presented in tb:result:ablation_study. One can see that in most scenarios the proposed method obtained the best performance compared to the others. For example, without the localization constraints, the model presented worse performance in identifying septal MI. Note that septal MI is normally complex to detect, due to its unique position and overlapping ECG effects from neighboring regions, such as the anterior and inferior walls. We observed that the absence of ℒ_compact led to improved Dice in cases of inferolateral and subendocardial limited anterior MI and decreased Dice in cases of extensive anterior MI. Nevertheless, the reduction in outliers observed in the visualization results suggests that ℒ_compact effectively suppresses them, leading to more reliable and accurate predictions. The extent constraint was also crucial, particularly in distinguishing between subendocardial and transmural MI that present different sizes in the same anatomical position. §.§ Extended Evaluation §.§.§ Exploring the Detection Limit of QRS for Small Infarct Areas To investigate the smallest infarct area that can be detected from QRS complexes, we employed apical MI as an example, varied the infarct size, and retrained the model based on the pre-trained one. The idea behind this approach is to determine the sensitivity of QRS-based detection methods for small infarct areas, which may have important clinical implications for risk stratification and management of post-MI patients. Figures <ref> (a) and (c) demonstrate that as the infarct size decreased, the QRS morphological changes also diminished. This is because a smaller infarct would have a lesser impact on the overall electrical conduction and activation patterns of the heart. Consequently, the deviations in the QRS, which represent the depolarization of the ventricles, would be less pronounced. Nevertheless, our method can still extract subtle features from the QRS complex that may be indicative of small infarct areas, as fig:result:QRS_MIsize (b) shows. This ability reached its limit when the Cobiveco apicobasal radius r_ab of scars was reduced to 0.1 for apical MI. §.§.§ Correlation Analysis: Relationship between ECG/PC Reconstruction and MI Inference Accuracy To evaluate the robustness of the proposed inference scheme to the reconstruction error, we analyzed the relationship between the reconstruction and inference errors of the proposed method. The PC and ECG reconstruction errors were calculated as 0.5*ℒ^rec_PC with α=1 and ℒ^rec_QRS, respectively. The r^2 values of scar/BZ for the PC-MI and ECG-MI inference correlations were 0.002/0.006 and 0.008/0.009, respectively, indicating no relationship between inference and reconstruction accuracy.
This implies that the accuracy of MI inference using the proposed method was not significantly influenced by the quality of the reconstruction. This is reasonable, as the proposed method focuses on extracting relevant features from the input data rather than relying solely on accurate reconstruction for MI inference. Nevertheless, the reconstructions are still necessary as they provide valuable information for the inference. To demonstrate this, we conducted a comparison by removing the reconstruction steps, and the results noticeably decreased (AHA-loc scores: 0.610 ± 0.343 vs. 0.561 ± 0.338 for subendocardial MI, and 0.659 ± 0.339 vs. 0.585 ± 0.367 for transmural MI), highlighting the significance of incorporating reconstruction in the inverse inference. §.§.§ Comparison with Conventional MI Inference Method To demonstrate the efficacy of our approach, we conducted a comparative analysis with the Selvester QRS scoring system <cit.>. The score criteria have been employed to identify scar location based on QRS phenotypes, such as wave duration (Q or R), wave amplitude (R or S), amplitude ratio (R/Q, R/S, R/R^', or S/S^'), and QRS slurs or notches <cit.>. ... § DISCUSSION AND CONCLUSION In this paper, we have developed a deep computational model to tackle the inverse problem in cardiac electrophysiology, i.e., inferring MI distribution from QRS signals. Through the integration of anatomical and electrophysiological data, we achieve a comprehensive analysis that incorporates different infarct locations, sizes, transmural extents, and cardiac electrical activity alterations. By consistently representing the ventricular anatomy in a coordinate reference system, we establish a robust sensitivity analysis framework for studying the association between infarct characteristics and QRS abnormalities. The sensitivity analysis results have demonstrated significant morphological alterations in the QRS complex for various post-MI scenarios, particularly inferolateral, extensive anterior, and apical MI. These findings suggest that the involvement of large areas of damaged heart muscle leads to pronounced changes in QRS morphology. Furthermore, the analysis emphasizes the impact of transmurality on QRS morphology, namely transmural MI presents more prominent changes compared to subendocardial MI. However, the differences in QRS between various infarct locations can be more pronounced than those caused by the extent of transmurality, indicating the greater sensitivity of QRS in localizing MI rather than predicting its transmural extent. The analysis further highlight the importance of lead selection in accurately detecting the location of infarction. Overall, the sensitivity analysis provides valuable insights into the relationship between infarct characteristics and QRS abnormalities, enhancing our understanding of the complex interplay between infarct characteristics and electrophysiological features. The proposed method can effectively segment and localize MI, even in scenarios with limited QRS morphology changes, demonstrating its feasibility of developing CDTs for MI patients. The results of the ablation study emphasize the importance of the localization and extent constraints in accurate MI inference. The proposed method exhibits the ability to detect small infarct areas, although its sensitivity is limited, as proved in our extended study. 
The correlation analysis demonstrates that while incorporating reconstruction in the inference process is important, the accuracy of MI inference is not significantly dependent on the quality of the reconstruction. To conduct a sensitivity analysis of MI properties, we intentionally selected consistent infarct locations, sizes, and transmural extents for each subject. While this ensured a controlled comparison, it may have led to a limited evaluation of MI inference. We conducted a small test by randomly selecting an infarct for each subject and obtained reasonably good results in only a few cases. This outcome is expected, because randomly simulating a single scenario for each subject limits the ability of the proposed model to learn and generalize across different infarct characteristics. To improve performance, a more diverse and comprehensive dataset with a wider range of infarct scenarios should be used to train the model in the future. Note that this work is an initial study, and there are several limitations that need to be acknowledged. Firstly, this study assumes a known set of RNs and fixed CVs for all subjects, which may not fully capture the complexity and heterogeneity present in real-world healthcare data. Therefore, further research is needed to personalize these activation properties based on individual patient characteristics and specific healthcare settings. Secondly, we only consider cardiac anatomical information and electrode nodes while disregarding the full torso geometry. The inclusion of torso geometry could provide valuable insights into its influence on QRS patterns. By incorporating the full torso geometry in our future work, we can gain a more comprehensive understanding of the factors influencing QRS patterns and improve the accuracy of our predictions and interpretations. Thirdly, this study focuses solely on the QRS complex, rather than considering the entire ECG signal. Applying the analysis to the whole ECG signal would provide a more comprehensive assessment but may require significant computational resources. To address this limitation, future research could explore computationally efficient surrogates to replace the expensive simulation model. Finally, while the developed CDTs can provide valuable insights into the mechanisms of MI, they are based on simplified assumptions about the heart and may not capture all aspects of the complex interactions between cardiac structures and functions. Given the limitations, particularly in the simulated dataset used, this work can only serve as a proof of concept until validation on clinical data can be performed.
http://arxiv.org/abs/2307.04425v1
20230710090012
Identification of Hemorrhage and Infarct Lesions on Brain CT Images using Deep Learning
[ "Arunkumar Govindarajan", "Arjun Agarwal", "Subhankar Chattoraj", "Dennis Robert", "Satish Golla", "Ujjwal Upadhyay", "Swetha Tanamala", "Aarthi Govindarajan" ]
eess.IV
[ "eess.IV", "cs.CV" ]
Article Title]Identification of Hemorrhage and Infarct Lesions on Brain CT Images using Deep Learning [1]Arunkumar [email protected] 2]Arjun [email protected] [2]Subhankar [email protected] 2]Dennis [email protected] 2]Satish [email protected] 2]Ujjwal [email protected] 2]Swetha [email protected] 1]Aarthi [email protected] *[1]Aarthi Scans & Labs, Chennai, Tamil Nadu, India [2]Qure.ai, Mumbai, Maharashtra, India Head Non-contrast computed tomography (NCCT) scan remain the preferred primary imaging modality due to their widespread availability and speed. However, the current standard for manual annotations of abnormal brain tissue on head-NCCT scans involves significant disadvantages like lack of cutoff standardization and degeneration identification. The recent advancement of deep learning-based computer-aided diagnostic (CAD) models in the multidisciplinary domain has created vast opportunities in neurological medical imaging. Significant literature has been published earlier in the automated identification of brain tissue on different imaging modalities. However, determining Intracranial hemorrhage (ICH) and infarct can be challenging due to image texture, volume size, and scan quality variability. This retrospective validation study evaluated a DL-based algorithm identifying Intracranial hemorrhage (ICH) and infarct from head-NCCT scans. The head-NCCT scans dataset was collected consecutively from multiple diagnostic imaging centers across India. The study exhibits the potential and limitations of such DL-based software for introduction in routine workflow in extensive healthcare facilities. [ [ August 12, 2023 =================== § INTRODUCTION In Cognitive Neuroscience, Neuropsychological investigation of stroke patients is widely utilized in advancing our knowledge of brain functions. The considerable insight into the relation of the brain function to its anatomy has been determined via correlation analysis between physical brain damage and impaired behavior <cit.><cit.><cit.>. The stroke topology can be broadly classified into two types: 1) Intracranial hemorrhage (ICH), the rupture blood vessel within the brain which causes bleeding. The common factors related to the cause of ICH are advanced age, heavy alcohol usage, and high blood pressure (hypertension) <cit.>. As per some recent studies, although ICH accounts for 10–15% of all stroke-related deaths, over the last thirty years, the mortality and morbidity have not changed, particularly in developing countries <cit.>. 2) Ischemic stroke or infarct, is interruption of blood flow due to blood clot. Infarct is generally caused by the buildup of plaques (atherosclerosis) over time in the arteries. Globally, over 13.7 million individuals have a stroke each year, of which approximately 70%, i.e., 9.5 million, are infarct <cit.>. Presently, mapping of the stroke lesion is regularly done using Computed tomography (CT) and magnetic resonance imaging (MRI). The MR (T1- weighted and T2- weighted) anatomical images are acquired as a part of routine practice for stroke patients. In stroke suspected patients with negative CT scans, MRI can also be performed. After the first few hours of onset, the ischemic stroke can be identified using the MRI. Additionally, the differentiation of irreparably damaged brain tissue and the tissue at risk due to infraction can be diagnosed using the MRI. 
However, CT is the preferred imaging modality over MRI in acute stroke care units and clinical trials due to the reduced exclusion criteria compared to MRI, affordability, speed, and accessibility <cit.>. In CT, hemorrhage is perceived as a bright region (hyper-dense) exhibiting sharp contrast and infarct as a dark region (hypo-dense), depending on the time elapsed since onset. Manual annotation of abnormal brain tissue by trained neuroradiologists is currently the standard method for lesion identification <cit.>. However, manual annotation of abnormal brain tissue has several disadvantages <cit.>. 1) Lack of cutoff standardization: there is no standard protocol for an explicit cutoff, particularly around the ventricles, to differentiate lesioned and non-lesioned tissues; as a result, this approach produces large variability and lacks reproducibility across operators. 2) Degeneration identification: the stroke-induced degeneration occurring outside the lesion in chronic stroke patients is not captured in the standard manual annotation process, even though such degeneration has a significant clinical impact on patients. The recent advancement of deep learning based computer aided diagnostic (CAD) models in medical imaging and signal processing can significantly assist in overcoming the existing challenges <cit.><cit.><cit.><cit.><cit.>. In addition, manual editing combined with an automated detection solution for hypo- or hyper-dense regions, kept under operator supervision, can assist in overcoming the present challenges <cit.>. More recently, a study using large CT datasets to remove the inter-subject variability in brain lesion characterization using an automated approach was proposed <cit.>. Several state-of-the-art algorithms have been proposed for lesion segmentation in MR images over the past few years, but very few have been developed to address stroke lesions on CT scans. Most of the earlier work published to validate automated solutions was directed toward identifying ICH. As ICH appears bright in CT scans, developing an automated solution based on supervised or unsupervised learning algorithms, or extracting morphological features from labeled images to differentiate between true lesioned and non-lesioned tissues, is less challenging <cit.> <cit.>. Infarct identification, on the other hand, is a less popular problem statement among researchers compared to ICH detection due to its challenging nature. To address this issue, a rule-based approach based on seeded region-growing algorithms was very recently proposed, extracting hand-crafted features such as position relative to an axis of symmetry, texture, and brightness <cit.>. However, the primary disadvantage of this study is that seeded region-growing algorithms may not be able to define the boundaries of the stroke region distinctively. In this study, we have evaluated an Artificial Intelligence (AI) based automated CAD algorithm, built on deep learning and capable of identifying ICH and infarct on Head Non-contrast Computed Tomography (Head-NCCT) scans. The solution was previously validated for detecting ICH on Head-NCCT scan images <cit.>. The Institutional Review Board (IRB) has approved the proposed retrospective study. We demonstrated the effectiveness and validity of the automated CAD solution in detecting ICH and infarct and in quantifying infarct on Head-NCCT scans.
Our proposed validation will provide a rapid and efficient tool for both research and clinical application. It will assist in the broader adaptation of automated CAD solutions at extensive clinical facilities. § MATERIAL AND METHODS The study was a HIPAA-compliant retrospective study with Institutional Review Approval (IRB) from Royal Pune Independent Ethics Committee (RPIEC) (IRB No. RPIEC240123). Informed consent was obtained from all participants. All methods were carried out in accordance with relevant guidelines and regulations. The primary objective was to evaluate the commercially available deep learning-based algorithm qER (Qure.ai Technologies, Mumbai, India) in terms of Area Under the Receiver Operating Characteristics Curve (AUC) in triaging Head-NCCT scan in detection and quantification of infarcts. It was estimated that a minimum sample of 418 Head-NCCT scans (167 Head-NCCT scans image with radiologist-confirmed infarcts, 251 Head-NCCT scans images without infarcts, 2:3 ratio) would provide a minimum of 80% power to estimate an anticipated AUC of 80% with 7% precision assuming a Type I error rate of 5% <cit.><cit.>. The Head-NCCT scans, and their signed-off original radiological report performed from 01-September-2021 to 31-August-2022 were acquired from diagnostic imaging centers across India. A total of 1878 Head-NCCT scan were collected. The original radiological report of these scans was subjected to a manual review by a clinical data abstractor to classify the scans into infarct, and non-infarct reported scans based on the original radiological report. A stratified random sample of 500 Head-NCCT scans stratified by the presence and absence of infarct (based on the original radiological reports) were then selected for independent ground truthing by a radiologist with more than fourteen years of experience. The inclusion criteria were Head-NCCT scans with soft reconstruction kernel covering the complete brain, slice thickness ≤ 6mm. The exclusion criteria were Head-NCCT scans with obvious postoperative defects or from patients who had previously undergone brain surgery, Head-NCCT scans with artifacts such as burr holes, shunts or clips, Head-NCCT scans containing metal artifacts, excessive motion artifacts, Head-NCCT scans containing missing and improperly ordered slices. The ground truther radiologist had access to the original head NCCT scan image but was blinded to the original radiology report. The ground truther reviewed all the Head-NCCT scans and provided segmentation boundaries for infarcts and intracranial hemorrhages. The ground truther radiologist also provided a binary response for the presence or absence of cranial fracture, midline shift, and mass effect. The ground truth output was the reference standard for all downstream statistical analyses, not the original radiological report. The sensitivity and specificity were estimated based on a default device threshold (available from the manufacturer based on internal testing), and the optimum threshold was based on Youden's index. The 95% confidence intervals for sensitivity and specificity are reported based on exact method <cit.>. AUC and 95% confidence interval (CI) was estimated based on the empirical method and De Long methodology, respectively <cit.>. The segmentation provided by the ground truther radiologist was utilized for the quantification analysis of the error in the predicted infarct volume by the DL-based algorithm. 
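The operating-point analysis described above (default versus Youden-optimal threshold, exact confidence intervals, empirical AUC) can be sketched in Python as follows; the study itself used R, so this is only an illustrative equivalent, and the DeLong confidence interval for the AUC would require a dedicated implementation or package not shown here.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve
from statsmodels.stats.proportion import proportion_confint

def operating_point(y_true, y_score, threshold):
    """Sensitivity and specificity with exact (Clopper-Pearson) 95% CIs at a threshold."""
    y_true = np.asarray(y_true).astype(bool)
    y_pred = np.asarray(y_score) >= threshold
    tp, fn = np.sum(y_pred & y_true), np.sum(~y_pred & y_true)
    tn, fp = np.sum(~y_pred & ~y_true), np.sum(y_pred & ~y_true)
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    sens_ci = proportion_confint(tp, tp + fn, alpha=0.05, method="beta")
    spec_ci = proportion_confint(tn, tn + fp, alpha=0.05, method="beta")
    return sens, sens_ci, spec, spec_ci

def youden_threshold(y_true, y_score):
    """Threshold maximising Youden's J = sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    return thresholds[np.argmax(tpr - fpr)]

# Empirical AUC; DeLong CIs need a separate implementation.
# auc = roc_auc_score(y_true, y_score)
```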
Absolute errors in infarct volume estimation in milliliter (mL), and summary statistics of absolute errors were reported. The statistical analyses were performed using RStudio (RStudio version 2022.07.1, R version 4.2.1) and Python version 3.9.7. § EXPERIMENTAL RESULTS §.§ Identification of ICH and Infarct The ground truthing was completed for 428, while 22 Head-NCCT scan were excluded due to the inclusion and exclusion criteria mentioned in section <ref>. A total of 187 Head-NCCT scan confirmed (based on ground truth) the presence, while 241 Head-NCCT scan confirmed the absence of any infarcts. This distribution of scans with and without infarcts met the minimum sample size requirements described earlier in <ref>. In addition, 21 scans with intracranial hemorrhages (ICH) and 23 scans with cranial fractures were present in the sample. A total of 212 (49.5%) of the 428 Head-NCCT scans did not contain any infarcts, intracranial hemorrhages, cranial fracture, midline shift, or mass effect. The distribution of the Head-NCCT scans is shown in Table. <ref>. It can be observed from Table. <ref> that the DL-based algorithm achieved an AUC of 86.8% (95% CI: 83.4 - 90.2) in detecting scans with the presence of infarcts while the sensitivity and specificity were estimated to be 66.8% (95% CI: 59.6-73.5)and 86.7% (95% CI: 81.8-90.7) respectively at the default threshold. The optimum operating threshold was determined using Youden’s index. At this optimum threshold, it was observed that the sensitivity of the DL-based algorithm improved to 80.2% (95% CI: 73.8 - 85.7) without substantial reduction in specificity 80.1% (95% CI: 74.5 - 84.9). For ICH, an AUC of 94.8% (95% CI: 87.4 - 100) was achieved. There was no change in sensitivity compared to the default and optimum threshold, while the specificity increased by 3% using the optimum threshold. In contrast, the sensitivity of cranial fracture compared to the default and optimum threshold, an enhancement of 15.8% was observed while the specificity decreased by 2.7%. In Fig. <ref>, the AUC-ROC plot for Cranial Fracture, ICH, and Infarct is given. §.§ Quantification of Infarct Volume The DL-based algorithm for identifying infarcts produces the infarct volume in mL. A total of 150 true positive scans for which both DL-based algorithms predicted volume and ground truth volume were available for this analysis. The reference standard was radiologist annotations done for each Head-NCCT scan images. The mean absolute error (MAE) was 4.7 mL for overall scans. Based on ground truth volume, the scans were further divided into two categories - scans with 0 - 5 mL and > 5 mL infarcts volume, respectively. It can be observed from Table. <ref> that the MAE for 0 - 5 mL and > 5 mL scans were found to be 3.2 mL and 8.557 mL, respectively. In Fig. <ref> from the scatter plot of infarct volumes (1), it can be observed that with an increase in infarct volume, there is a positive correlation between DL-based algorithm volume and ground-truth annotated volume. The Bland-Altman plots showing good agreement between the ground truther annotation and predicted volume by the DL-based algorithm are shown in Fig. <ref> (2). §.§ Visual Explanations of DL-based Algorithm The experimental findings depict that the evaluated DL-based algorithm achieved superior performance represented in Table. <ref> and <ref>. In most DL-based models the rationale behind the prediction is not reveled explicitly. 
Since these DL black box models cannot be decomposed into intuitive and comprehensible modules, they are hard to interpret. Consequently, end-users develop skepticism and find the models difficult to trust. The emergence of explainable artificial intelligence (XAI) is an essential aspect of model transparency and the social right to an explanation of DL inferences <cit.>,<cit.>. XAI encompasses a better understanding of incoherent output, isolates failure modes, and builds trust in intelligent systems for effective incorporation into our everyday lives <cit.>. The DL-based algorithm evaluated here outputs a boundary around the infarcts, which reveals the rationale behind its superior performance. In Fig. <ref>, it can be observed that for both small and large infarct volumes on Head-NCCT scans, the model-predicted boundary clearly overlaps with the ground truther boundary. § DISCUSSION This retrospective study evaluated a deep learning algorithm for detecting infarcts in Head-NCCT scans. The algorithm had a good AUC of about 86% in detecting infarcts. After adjusting the threshold, a balanced sensitivity of 80.2% and specificity of 80.1% were estimated for detecting infarcts. The algorithm's sensitivity in detecting infarcts in scans with no other target abnormalities was found to be 80% (136 correctly detected out of 170). It did not differ from the overall sensitivity at the optimum threshold. This demonstrates the robustness of the DL-based algorithm in identifying infarcts, with a negligible drop in sensitivity in the presence of other abnormalities. Additionally, it is to be noted that the sensitivity of Head-NCCT scans in detecting infarcts is generally considered low, especially in the case of hyperacute and acute ischemic strokes. In one study, the sensitivity of detecting acute ischemic stroke on head NCCT scans ranged from 57% to 71% with considerable inter-reader variability <cit.><cit.>. Additionally, we evaluated the performance in detecting ICH and cranial fracture, and both had excellent AUCs. However, the interpretation is limited by the low sample sizes for these two abnormalities. Our results also show that threshold adjustments might be needed before using such algorithms routinely for clinical decision support. Deep learning and big data models are often called "black boxes" and represent substantial obstacles to introducing intuitive and comprehensible modules into actual clinical practice; these models are challenging to interpret. However, the DL-based method validated in this study provides a post-hoc attention tool for the clinician to identify the lesion visually. In addition, the DL-based algorithm validated in this study encompasses a better understanding of incoherent output, isolates failure modes, and builds trust in intelligent systems for effective incorporation into routine clinical practice. Moreover, the proposed validation of the DL-based algorithm will be beneficial in resource-constrained areas with a limited number of radiologists or with access only to teleradiology facilities. Our study has limitations. First, the differentiation of infarcts into acute and chronic was not analyzed. Second, the ground truthing for the head NCCT scan images with infarcts was done by a single radiologist. Third, there were not enough scans for ICH and cranial fracture to estimate performance metrics with sufficient precision. § CONCLUSION The present study evaluated a DL-based algorithm to determine the presence or absence of ICH and infarcts on head-NCCT scans.
The DL-based algorithm demonstrated high detection performance in identifying infarcts, ICH, and cranial fracture. Additionally, the infarct volumes predicted by the DL-based algorithm showed a positive correlation with the ground-truth annotated volumes. The study demonstrated the performance of ICH detection and of infarct detection and quantification, indicating the feasibility of introducing such DL algorithms into routine workflows in extensive healthcare facilities. § DATA AVAILABILITY The datasets used or analyzed during the current study are available from the corresponding author on reasonable request.
http://arxiv.org/abs/2307.06066v1
20230712103527
Security in Online Freelance Software Development: A case for Distributed Security Responsibility
[ "Irum Rauf", "Tamara Lopez", "Thein Tun", "Marian Petre", "Bashar Nuseibeh" ]
cs.CR
[ "cs.CR", "cs.CY" ]
Security in Online Freelance Software Development: A case for Distributed Security Responsibility Irum Rauf, Tamara Lopez, Thein Tun, Marian Petre The Open University, Milton Keynes, UK [email protected] Bashar Nuseibeh The Open University, UK Lero, Republic of Ireland [email protected] August 12, 2023 ========================================================================================================================================================================================================================================= Secure software is a cornerstone to safe and resilient digital ecosystems. It offers strong foundation to protect users' sensitive data and guard against cyber-threats. The rapidly increasing landscape of digital economy has encouraged developers from different socio-technical and socio-economic backgrounds to join online freelance marketplaces. While, secure software practices facilitate software developers in developing secure software, there is paucity of research on how freelance developers adhere to security practices and how they can be facilitated to improve their security behavior in under-resourced environments. Moreover, freelance developers are often held responsible for producing insecure code. In this position paper, we review existing literature and argue for the case of distributed security responsibilities in online freelance environment. We propose a research agenda aimed at offering an organized and systematic effort by researchers to address security needs and challenges of online freelance marketplaces. These include: characterising software security and defining separation of responsibilities, building trust in online freelance development communities, leveraging the potential of online freelancing platforms in the promotion of secure software development and building adaptive security interventions for online freelance software development. The research has the potential to bring forth existing security solutions to wider developer community and deliver substantial benefits to the broader security ecosystem. freelance software development, security, developer, social insert § INTRODUCTION Online freelance marketplaces offer advanced systems for remote collaboration, connecting self-employed workers (freelancers) with clients (individuals, small businesses, and large corporations) across the globe <cit.>. The reported figures by major freelancing platforms suggest that the scale of the global online labor is huge. Being one of the prominent freelance platforms, Upwork reported that more than 145 thousand clients spend over $ 2.5 billion per year, indicating the platform has significant number of users <cit.>. Pre-COVID studies estimated that the demand for online freelancing platforms grew by approximately 21 percent from May 2016 to January 2018 with highest demand for software development and technology skills <cit.>. COVID-19 has catalysed remote work and the situation looks irreversible with more and more of the workforce adopting remote working model <cit.>. Secure software development is an integral part of software development in today's digitized world with constant security threats looming over businesses and daily lives of individuals. While developer-centered security <cit.> has received much attention in the last decade <cit.>, security in freelance software development has received little attention. 
Below, we highlight the need to investigate the security practices among freelance developers and to motivate the need to provide support to this cohort to develop secure software. §.§ Motivation to study Freelance Software Developers for secure software development §.§.§ Existing studies on freelance software developers focus on insecure outcome Existing work on security behavior of freelance developers <cit.>, <cit.> and on understanding security in the freelance development ecosystem <cit.> notes that freelance software developers produce more insecure code and holds them accountable for it <cit.>. However, recent studies (<cit.>,<cit.>) attempt to understand why freelance developers produce (more) insecure code. The work of Ryan et al. <cit.> investigates levels of secure coding practices for developers who are under-represented in literature, i.e. isolated developers, open source developers, freelancers and small organisations. They investigate how these cohorts adhere to common security practices. Their empirical findings reveal that these security practices are resource intensive and highlight the need to target small and under-resourced software development communities with tailored software security advice. The work of Rauf et al. <cit.> suggests that online freelance software development has unique marketplace dynamics that can lead to security compromises. Their work emphasizes the need for tailored security interventions to support freelance software developers working within platforms. §.§.§ Freelance developers can be serious and educated developers The need for offering support to freelance developers to improve their security behavior is exacerbated by the fact that freelance work-model is increasingly being adopted as a serious career - as an alternative to company employment. The Stack Overflow survey <cit.> reports that nearly 15% of developers that they surveyed are independent contractors, freelancers, or self-employed, making online freelance software development(OFSD) a significant part of the software industry. A recent industry report shows that non-temporary freelancers are growing, with 44% of freelancers saying that they earn more from freelancing than with a traditional job in 2021 <cit.>. Moreover, the prevalence of freelancing is increasing among individuals with higher levels of education, while it is declining among those with lower levels of education <cit.>. Similar findings were reported in prior work: an empirical study with freelance developers found that more than 50% participants had post-graduate education and learnt software development through formal education <cit.>. The study also reported that 90% of interviewed freelance developers could be characterized as serious developers who earned regular income from freelancing as full-time or part-time career. These findings about freelance developers from both industry and academia underline the significance of this growing demographic of developers the needs of which should be catered to. §.§.§ Software developed by freelance developers have consequential effects Freelance developers are perceived as being non-serious developers who are unreliable <cit.> producing low-quality outputs and showing a lack of commitment around security issues <cit.>. This perception may be grounded on the fact that online freelance marketplaces are open to all kinds of developers - those who know their work well and those who do not. 
While there are many non-serious developers, online freelancing platforms also host a huge number of serious developers who do a decent job. This is suggested by the fact that clients increasingly hire from these freelancing platforms and pay them <cit.>. Rauf et al. <cit.> reported that freelance developers do non-trivial jobs, i.e. most of their study participants worked on projects that were customer facing, such a mobile apps, web development, commercial products. Moreover, in today's world of digital enhancements, software products increasingly depend on one-another within the software supply chain - - and within which, each job performed forms a significant link. A clear instance of this is Log4Shell (CVE-2021-44228), a vulnerability found in Log4j, a widely used open-source Java logging tool. This particular flaw was publicly revealed in the latter part of 2021 and was quickly exploited by malicious individuals. By the end of 2022, there were reports indicating that North Korea had utilized this vulnerability to gain initial access to the networks of American energy companies <cit.>. This indicates that software products developed by freelance developers have far reaching effects. §.§.§ Widespread adoption of easy to use application development frameworks Developing software is no longer the domain of the select few with deep technical skills, training and knowledge. A wide range of people from diverse backgrounds are developing software for smart phones, websites and IoT devices used by millions of people. The rise of easy-to-use development frameworks, such as WordPress have encouraged people from non-technical backgrounds to develop applications that are used by a number of users. To take an example, in an earlier study with freelance software developers<cit.>, participants without a programming background reported that they used WordPress because it offered an easy-to-use interface. However, such frameworks are well-known to attackers for their vulnerabilities <cit.> - a risk that was perhaps unknown to the clients of freelance developers and of no concern to online freelancing platforms that are only tasked with facilitating transactions. In this position paper, motivated by the reasons above, we outline a case for identifying roles and responsibilities in online freelance software development and propose a’call-for-action’ to stakeholders of freelancing platforms to facilitate secure software development practices for this cohort of developers. We consider it an important step to tackle challenges to writing secure code in online freelance software development platforms that will only magnify with time. Moreover, we see a global presence of developers from different walks of life and different parts of the world. By better leveraging the potential of these freelance developers through tailored security interventions, we can offer developers working in these platforms opportunities to polish their skills and advance their careers by increasing their ability to address vital issues in software engineering in a responsible manner. Moreover, the software development industry can share the benefits of a skilled workforce that is globally available on the online freelancing platforms, countering the fast growing need for developers in today's digital economy. § DISTRIBUTED RESPONSIBILITY FOR SECURITY IN FREELANCE SOFTWARE DEVELOPMENT Responsibility in its general sense is often “concerned with having to answer why one acted as one did” <cit.>. 
This often becomes debatable when questioning whether the question is addressed to the right person or not, whether one actually took an action (or not), or whether the question was characterized correctly or not <cit.>. Nonetheless, responsibility is an important concept that helps in holding someone accountable for a task that was not performed or not done as it should be. The responsibility of security for freelance development is an under-explored area. The work of Ahmed and van den Hoven <cit.> consider freelance developers as agents of responsibility in web application development. In the light of existing theories on moral responsibilities of software developers <cit.> and ethics in information technology <cit.><cit.>, their work identifies freelance web developers as “liable, accountable, blamable, and causally responsible for their work.” (p.423, <cit.>). The work further concludes that “ Freelance web developers are answerable for the possible negative consequences of their actions and omissions.” (p.423, <cit.>). Such viewpoints are exacerbated by empirical studies conducted with freelancers software developers which report that freelance developers lack responsibility <cit.> and do not attend to security <cit.>. We find such analysis in line with the sentiment that the developer is the enemy <cit.>. Conversely, aligning with the counterview that the developer is not the enemy <cit.>, our work shows that freelance developers are not the sole agents of responsibility for secure code. We argue that the responsibility of security in freelance development is better characterized as a problem of many hands [The term problem of many hands is taken from the work of Noorman <cit.>, wherein, it is discussed as a general issue of determining responsibility in the computing discipline where many parties are involved in the supply chain from developers to end user. In this paper, we discuss the problem of many hands in the context of security in freelance development.], i.e. it becomes difficult to determine who is responsible for security since multiple entities contribute to the project's security outcome in freelance development making it easy to assign blame to someone else for not handling security. This case of assigning blame to other parties is also reported in earlier work <cit.>, wherein some freelance developers consider secure coding responsibility of developer while others consider it responsibility of the client who has to pay for extra effort. Below, we outline key stakeholders in online freelance software development and unpick the subtleties of responsibilities of these stakeholders. §.§ Freelance Developer - Responsibilities and Challenges The responsibility of freelance software developers in producing secure software is an important one as they use their skills and knowledge to develop applications which have direct or indirect impact on different parts of the society <cit.>. They take on the contract and develop software by writing code and/or designing it [In some scenarios, the developer hired to do the project, may hire other developers to do the task of code development <cit.>]. In order to hold someone accountable for a job, it is important that the one being questioned has control over his/ her action <cit.>. Research <cit.> suggests that freelance developers are not oblivious to their responsibility and try to find a work around where they are challenged. 
However, freelance developers are often constrained in their jobs by different socio-technical factors, such as multivalent nature of security, relationships with client, algorithms of freelancing platforms, and choice of different development frameworks. Below we discuss these briefly. In order to hold someone accountable for a job, it is important to ask the right question, i.e. is the question characterised correctly or not? <cit.>. Freelance software developers are held responsible for doing security <cit.>. However, security vulnerabilities can be of varying nature. It “can be a lacking security requirement (e.g. lack of, or improper authentication, encryption, ...), or a development error in the software (e.g. buffer overflow, race condition, ...).” (p.93, <cit.>). Some security requirements are well know and hence many developers consider them as basic security, e.g. authentication, password hashing and encryption of sensitive information <cit.>.Due to multivalent nature of security <cit.> and diverse skill set of freelance developers <cit.>, participants have different perceptions of security <cit.>. Different perceptions on basic security result in false perceptions in developers that they are handling (or not handling) security <cit.>. Software security researchers need to explicitly define tangible characteristics of security that developers should adhere to. Some freelancers opt for popular development frameworks because they are easy to use not requiring expert programming skills <cit.>. Some of these frameworks maybe insecure <cit.> but offer (paid) secure plugins. However, clients are not always willing to pay <cit.>. Moreover, some freelancers find it hard to stay updated with various security plugins of such frameworks <cit.>. Here again we notice that freelancers are aware of shortcomings of the frameworks they use with some switching to another development framework and others tend to hide URLS in an effort to avoid attention of the attackers <cit.>. Other freelancers, who heavily rely on development frameworks tend to stay updated with their frameworks as they do not have time to stay updated with changes in security landscape in general <cit.>. Moreover, empirical studies <cit.> suggest that freelance developers consider it responsibility of freelancers to initiate discussion on security with the client and inform about any security issues to non-technical clients in particular. However, developers find it difficult to discuss security issues with non-technical clients who think freelancers are finding ways to make extra money. Henceforth, some freelancers try to work with only technical clients who understand the technicalities of software projects, or they work around by developing long- term working relationship with their client to infuse trust in their relationships. Nonetheless, FL developers who are new to online platforms, struggle to select right clients. Algorithms in online platform provide greater visibility to developers who have done more projects and have good rating from clients. Thus, these freelancers may have to compromise on security in order to complete a reasonable number of projects with clients who don't take security seriously.Only when they have a stronger profile, they are in a better position to select clients who understand technical requirements of the project and give extra time and money for secure development. A recent study by Munoz et al. 
<cit.> offer similar insights on “how online freelancer's identity presentation is constrained by the structuring of their profile, the ratings and client feedback, the algorithms used by the digital platform, and platform's terms of use” (p.1). The study reports that freelance workers realize how these platforms control their identity and that they resist the deconstructed identity imposed on them by the online platforms. Lack of adoption of common security practices in this cohort of developers is also a challenge <cit.>. An earlier study with freelance web application developers <cit.> showed that many freelancers are unaware of the OWASP top 10 list of web application vulnerabilities <cit.>, and a more recent study <cit.> showed that the use of automated security tools is very low among freelance developers, even though such tools could be of most benefit to under-resourced developers. §.§ Client - Responsibilities and Challenges Clients are an important stakeholder of freelance development as they hire a freelance developer and pay for the project. In this section, we outline the responsibilities of clients to encourage them to take a responsible role in freelance development. In the presence of explicit security requirements, (freelance) developers tend to produce secure software, as they are primed to think of security <cit.> and the software product can also be validated against security requirements <cit.>. However, clients may not always have a technical background, and security may not be at the top of their mind. In such scenarios, clients find it difficult to trust freelance developers who ask for extra money for secure development <cit.>. Studies report that trust, as an important factor in the client-freelancer relationship, influences how security is handled in freelance software projects <cit.>. Moreover, clients are often advised to hire good developers if they want a secure product, which is often translated into hiring expensive developers, but this is not always the case <cit.><cit.>. It is important to help clients who are interested in developing good-quality secure software to find the right talent in the online freelance software development market. Even when clients are willing to pay extra, Rauf et al. <cit.> report that perceptions on payment for security vary: some freelance developers charge extra for secure development, while others do not and still do secure coding, considering it part of development. Furthermore, the current feedback mechanism in freelancing platforms ranks freelancers on positive feedback from clients, mainly on meeting project timelines and good communication. This makes it challenging for clients, especially non-technical clients, to hire the right developer. Moreover, technology-naive clients find it difficult to have meaningful conversations with freelance developers, which results in compromises on security. Some freelancers avoid security because clients never ask for it <cit.>. Clients should explicitly discuss developers' security perceptions to ensure security is addressed in their projects. Additionally, responsibilities are often not explicitly defined in remote teams <cit.>. This exacerbates the problem of many hands, with freelance developers often holding someone else responsible for security in the project. These challenges require that clients be facilitated with security interventions to raise their awareness of insecure software and to understand the business case of security in software.
Additionally, platforms should provide easy to understand security information to clients to have a meaningful conversations with freelance developers. Clients should also make it explicit to freelance developers working in online teams if there is someone else responsible for security. §.§ Freelancing Platforms - Responsibilities and Challenges The role of freelancing platforms is an important one as they have the capacity to influence work performance of freelancers <cit.>. Although moral responsibilities have in general revolved around the role of humans, with the prevalence of technology, the human activities cannot be fully understood without a reference to technological artifacts <cit.>. The online freelancing platforms which are actively used by developers around the world are sociotechnical systems. Based on the work of Bijker et al. <cit.>, sociotechnical systems are defined by Noorman <cit.> as the systems in which “tasks are distributed among human and technological components, which mutually affect each other in contingent ways” (p.1.). Freelancing platforms act as “active mediators” <cit.> and have the potential to promote security in freelance software development. Verbeek <cit.> highlights technological artifacts as “active mediators” that “actively co-shape people's being in the world: their perception and actions, experience and existence”( p. 364). In recent years, we have seen skyrocketed rise in the business value of freelancing platforms [https://www.zdnet.com/finance/upwork-delivers-uneven-q3-but-touts-904-million-of-total-value-sourced-from-platform/] with a sharp increase in freelance workforce (UK alone has seen an increase of 46% from 2008 to 2017 <cit.> ). We postulate that freelancing platforms hold a pivotal position to influence behavior of clients and developers by offering (security) interventions and fulfill their social responsibility as active mediators. This is in line to the work of Gottenbarn <cit.> - according to Gottenbarn considering technological artifacts as ethically neutral is a misplaced belief and there can be detrimental consequences of missing the broader context in which the technologies sit in. Unfortunately, despite the pivotal position that freelancing platforms hold in freelance software development ecosystem, to the best of our knowledge we did not find any research in how freelancing platforms can facilitate and promote security culture in freelance software development environment. While developer centered security is an active research areas <cit.> with researchers and practitioners studying and facilitating security culture in software companies <cit.> and open-source communities <cit.> and also investigating security responsibilities in software companies <cit.>, there is a need to focus research efforts on understanding the nuances of security responsibility in online freelance software development and the role of freelancing platforms in promoting security responsible behavior. § A RESEARCH AGENDA FOR PROMOTING SECURE SOFTWARE DEVELOPMENT IN ONLINE FREELANCE ENVIRONMENT Our analysis of existing literature suggests the need for a holistic look at secure coding behavior of freelancers and understanding the complex the socio-technical context they work in. Recent studies identify the unique marketplace dynamics of freelance software developers and the the nuances of security perceptions held by them <cit.>. 
Furthermore, research identifies that common security practices for secure software developed are insufficient for under-resourced developers and highlights the need for tailored security interventions for them<cit.>. Going forward we outline our research agenda and organize our suggestions into four areas to investigate: §.§ Characterising software security and defining separation of responsibilities In order to encourage consistent understanding of secure software development and facilitate separation of responsibilities, we postulate characterising security to identify basic and advanced security with separation of responsibilities, We suggest conducting empirical studies with professional developers, security experts and freelance developers to understand what they think is basic security that should be done as part of development without explicit security requirements. The thematic analysis on how freelance developers define basic and advance security <cit.> can be a good starting point. We then suggest use of authoritative sources to characterise security and provide a draft of basic responsibilities of a developer. [roundcorner=10pt] Key Research Questions: * How do developers and security specialists define basic security responsibilities? * What do security experts consider part of secure software development? * How do security responsibilities vary with programming languages and development frameworks? * How can we provide separation of security responsibilities and get consensus on it? §.§ Building trust in online freelance development communities Clients and freelance developers work together to produce a secure software. However, mistrust between the two can result in security compromise. We encourage multidisciplinary research to investigate theories of trust from behavioral sciences and use them to build trust in software communities for security. Toth et al. <cit.> conducted a survey with 127 freelancers to explore the relationship between virtual community trust, work engagement. Work-engagement has a strong link to meaningful work <cit.> and is defined as : ““a positive, fulfilling work-related state of mind that is characterized by vigor, dedication and absorption” (p. 74, <cit.>) and person–job fit is described as a match between personal abilities and demands of the job <cit.>. The works suggests that trust in digital communities positively affects both the work-engagement and person-job fit. "Freelancing platform can improve work performance through person–job fit by assisting in the creation of trust among members of their platforms” (p.1, <cit.>). Recent study by Bianca et al. <cit.> investigates factors that influence sense of belong of developers to a virtual community. The sense of belonging to a community retain contributors and improve project sustainability. It is important to examine the factors that impact the sense of virtual community among freelance developers in online freelance marketplaces. Furthermore, these platforms should incorporate these factors to enhance project sustainability and ensure the retention of freelance developers. Furthermore, we advocate the use of rich resources developed by academia and industry on computing code of ethics, and security community to offer induction courses to freelance developers when onboarding online freelance platforms. While, these courses may not be mandatory but developers who attempt them should get rewarded via badges or higher rank in search algorithms. 
Rogerson suggests that “Codes of ethics and practice can be enormously powerful if used proactively.” <cit.>. Software Engineering Code of Ethics and Professional Practices <cit.> can be used as springboards to offer membership/ licensing by institutions <cit.>. Freelance developers should be made aware and encouraged to take membership of professional bodies that promote responsible behavior. They should “strive to become members of international associations or community of computing” (p.422, ,<cit.>) and display it on their profile to stand-out from the crowd. Moreover, the freelancing platforms should adjust their algorithms to highlight the profiles of developers who advocate responsible behavior and display such badges. [roundcorner=10pt] Key Research Questions: * How can we use theories of building trust in communities from behavioral sciences in online freelance communities? * How can we leverage the extensive body of work on ethics in software engineering and utilise it to encourage responsible behaviour among developers? * How can freelancing platforms be encouraged to update their algorithms to highlight security responsible behavior? §.§ Leveraging the potential of online freelancing platforms in the promotion of secure software development Freelancing platforms have the potential to influence freelancers behavior and security culture in freelancing communities in a number of ways. However, the challenge is how to onboard freelancing platforms on this and build a compelling business case for security to them. Moreover, onboarding freelancing platforms on building security culture in freelancing platforms also comes as a moral and social responsibility that they are accountable for. Online freelance marketplace hold a unique and pivotal position in today's digital landscape to educate and influence developers who are under-resourced and come from deprived economy. Moreover, by offering security interventions in proximity to developers via online freelance marketplaces, it is possible to enhance the skills and conduct of this group of developers. This approach can effectively address the growing demand for responsible developers in the present digital economy. Research identifies that online freelance platforms influence and control identities of freelancers <cit.>. There is an immediate need to highlight the power that online freelance marketplaces hold in digital economy and over the careers of freelance developers. However, with great power comes responsibility. These responsibilities should come forth and researchers and security industries need to construct a convincing proposal for security to these platforms to get them onboard on promoting secure and responsible behavior among freelance developers. Our empirical work <cit.> also suggest many freelance developers also work in companies and opt for freelancing as a part-time job, or switch often between the two. Existing studies also suggest that workers may combine employment statuses by having multiple jobs <cit.>. The empirical study with freelancers , reported by Shevchuk and Strebkov <cit.>, report how individuals' work values differ in their self-employment situations. The different value sets is also evident in developers wherein Rauf et al. <cit.> reported that a developer shifted on his value-set depending on whether he is working on a project for the company or for himself. These differences in developers’ value-sets as they switch their working hat is noteworthy in the realm of security in freelance community. 
We consider it crucial to investigate the impact of mentorship for security through freelancing platforms on the security mindset of developers who work with companies lacking a security culture. This research direction holds significant importance. Freelance platforms also have great potential to facilitate researchers in conducting empirical studies. Danilova et al. <cit.> suggest that use of online freelancing platforms provides ecological validity for online security developer studies. However, experience of researchers (e.g., <cit.>, <cit.> and <cit.> with freelance platforms suggest that freelance platforms do not encourage researchers to recruit freelancers directly for research studies. Rauf et al. <cit.> report recruiting relatively large number of freelance software developers for a research study through a non-friendly user interface requiring them to create separate jobs and contracts for each individual freelance developer (reported in <cit.>. While it made the job very lengthy and exhausting (considering hiring of at least 124 freelance developers <cit.>),the requirement of job also required that freelancing platforms deduct a considerable amount from the paid amount for each study participant which may not scale well for limited research funding. The members of freelance platforms are not allowed to take payment outside the platform as the correspondence between the client and a freelancers are often checked and then penalised if there is a conversation on payment through other means <cit.>. Moreover, the study <cit.> report rejection from some freelancing platforms who did not approve research study considering it unsuitable for their platform. [roundcorner=10pt] Key Research Questions: * How can we effectively advocate for freelancing platforms to promote security interventions in a compelling business case? * How can we highlight the case of moral/ social responsibility of freelancing platforms? * How does value transfer occur between the freelance development and company work? * Can mentorship in freelance environment propagate to company practices ? * Which recruiting strategy is effective in recruiting freelance software developers for research studies? * How can we create a business case for freelance platforms to promote and support research on freelance software development? §.§ Building adaptive security interventions for online freelance software development “Adaptive security interventions take the socio-technical context into account, and therefore respond to the different security needs of the developer ” (p.25, <cit.>). We postulate that distributed security responsibility, wherein all the involved parties are aware of their responsibilities and comply with them, can be best done with adaptive security interventions. These adaptive security interventions should facilitate freelancers and clients in developing secure software. Clients have their own set of requirements such as awareness of negative consequences of software vulnerabilities and guidelines on how to recruit right developer in a given domain and development framework. Similarly, freelancers also work under varying needs and socio-technical settings. Their intervention needs can range from gaining familiarity of different types of security interventions <cit.> depending on domain, programming language, development frameworks, and developers' socio technical environment such as working alone or in team, and support in making a business case for security for different types of clients. 
These interventions need to be designed in a cost- and time-effective manner to capture the attention of freelancers, who are often time-poor. We also believe that security interventions should focus on positive responsibility <cit.>, i.e., focusing on what ought to be done and providing incentives to freelance developers, rather than on blaming or punishing them for irresponsible behavior. Key Research Questions: * How can we design security interventions to encourage security-responsible behavior as positive responsibility in freelance development? * How can we design interventions that can help freelance developers make security a selling point for clients? * How can we design adaptive security interventions for different types of developers and clients in a cost- and time-effective manner? § CONCLUSION In this position paper, we advocate the need for an organized and systematic effort by researchers to address the security needs and challenges of online freelance marketplaces. Based on our understanding of the existing literature, the rapid adoption of the freelance work model, and the exponential growth in the revenue of online freelance marketplaces, we highlight the case for distributed security responsibility among the different stakeholders of online freelance software development. The unique dynamics of online freelance marketplaces offer interesting challenges to advancing research in this domain, but this research has the potential to bring forth existing security solutions to the wider developer community and deliver substantial benefits to the broader security ecosystem.
http://arxiv.org/abs/2307.04558v1
20230710134704
On an uncertainty result by Donoho and Stark
[ "Oriol Baeza-Guasch" ]
math.FA
[ "math.FA", "math.CA" ]
On an uncertainty result by Donoho and Stark Oriol Baeza Guasch Universitat Politècnica de Catalunya Abstract. In the work of Donoho and Stark <cit.>, they study a manifestation of the uncertainty principle in signal recovery. They conjecture that, for a function with support of bounded size T, the maximum concentration of its Fourier transform in the low frequencies [-W/2, W/2] is achieved when the support of the function is an interval. In <cit.>, they are able to prove a positive result under the extra assumption that WT ≤ 0.8, using an inequality with symmetric rearrangements. In our work, we present a more elementary proof of their result, while also relaxing the required bound to WT ≤ 1. Furthermore, we study a discrete version of the problem, by considering complex polynomials and their concentration on subsets of the unit circle, and we prove an analogous result. Lastly, this result is used to improve an inequality by Montgomery, appearing in <cit.>. § INTRODUCTION To state the original conjecture by Donoho and Stark, we must introduce the following operators, defined for any f ∈ L^2(ℝ). First, the time-limiting operator for a given measurable subset 𝒯 ⊆ ℝ, (P_𝒯 f)(t) = f(t) if t ∈ 𝒯 and 0 otherwise, and second the frequency-limiting operator for a given measurable subset 𝒲 ⊆ ℝ, (P_𝒲 f)(t) = ∫_𝒲 e^2π i w t f̂(w) dw, where f̂ is the Fourier transform of f, with the convention f̂(w) = ∫_ℝ f(t) e^-2π i w t dt. Then, their conjecture can be stated as follows. The supremum sup ||P_𝒲 P_𝒯||, where 𝒲 is an interval and 𝒯 ranges over measurable subsets with fixed measure, is attained when 𝒯 is also an interval. In <cit.>, they are able to prove a positive result under the extra assumption given by the bound WT ≤ 0.8, where these quantities refer to the sizes of the subsets: W = |𝒲| and T = |𝒯|. However, we will rather work with a symmetric formulation of the statement, where we consider sup ||P_𝒯 P_𝒲|| instead. This will be more convenient for our work, given the interpretation that will be presented later, while being equivalent to the original conjecture. The norms ||P_𝒯 P_𝒲|| = ||P_𝒲 P_𝒯|| are equal. In particular, both formulations of the conjecture are equivalent. The proof is immediate after the observation that each of the operators P_𝒯 and P_𝒲 is self-adjoint, so the adjoint operator of P_𝒯 P_𝒲 is precisely P_𝒲 P_𝒯. As the norm of an operator on a Hilbert space is the same as that of its adjoint, the conclusion follows. Next, we will give an interpretation of this symmetric formulation that motivates the result that we will rather prove. For that, we introduce the concentration operator: for a measurable set 𝒯 ⊆ ℝ and f ∈ L^2(ℝ) we define c_𝒯(f) = ∫_𝒯 |f(t)|^2 dt / ∫_ℝ |f(t)|^2 dt. Now, because ||P_𝒲|| = 1 it follows straightforwardly that ||P_𝒯 P_𝒲||^2 = sup_{f ∈ L^2 : f = P_𝒲 f} ||P_𝒯 f||^2 / ||f||^2 = sup_{f ∈ L^2 : f = P_𝒲 f} ∫_𝒯 |f(t)|^2 dt / ∫_ℝ |f(t)|^2 dt = sup_{f ∈ L^2 : f = P_𝒲 f} c_𝒯(f), which we might interpret as calculating the concentration of a function f in the measurable set 𝒯, while restricting our attention to functions whose Fourier transform has support in 𝒲. Therefore, altogether the main result that we will prove in this work is Let W and T be real numbers such that WT ≤ 1.
Then, for all measurable subsets of the real numbers with size || = T, and functions f whose Fourier transform has support f̂ = [-W/2,W/2], the following inequality is true ∫_ |f(t)|^2 dt ≤∫_-T/2^T/2| g(t) |^2 dt where g is the function given by the inverse Fourier transform of |f̂|. In particular, by denoting 𝕀 = [-T/2,T/2], it holds c_(f) ≤ c_𝕀 (g). Which by the previous reasoning improves the result by Donoho and Stark, by relaxing the bound required. On the other hand, the difference in interpretation for the original conjecture is that there the concentration is computed in the frequency domain (rather than the temporal domain ). Nonetheless, there is a similarity in the approaches for the proof: to modify the function f to obtain a function with higher concentration, but with support on a single interval of the same size. In particular, in their proof the improvement in concentration is given by |f|^* the symmetric decreasing rearrangement, which is defined as, μ_f ( α) = | { t : f(t) ≥α}| < ∞ f^*(1/2 μ_f(α)) = f^*(-1/2 μ_f(α)) = α That is, the symmetric function decreasing about the origin that has the same measure of its level sets, and which will be supported on the interval [-| f|/2, | f|/2]. The use of symmetric rearrangements is motivated in their proof because it allows to use the following lemma, by Hardy, Littlewood and Pólya. Let f,g and h be positive functions. Then, ∫_ℝ∫_ℝ f(x) g(y) h(x-y) dxdy≤∫_ℝ∫_ℝ f^*(x) g^*(y) h^*(x-y) dxdy Their additional restriction WT ≤ 0.8 arises here, as they require that | t|^* = t. On the other hand, our restriction will arise from imposing only |sin t| = sin t in the domain, resulting in a less restrictive bound of WT ≤ 1. Finally, also worth mentioning that when restricting to both and intervals, the concentration operator has been widely studied. In particular, the functions maximizing the concentration are called the Prolate Spheroidal Wave Functions, with their characterization described with detail in the work of Slepian, Pollak and Landau <cit.>, among others. § PREVIOUS LEMMAS We first prove a useful inequality which takes advantage of the concavity/convexity of the sine function, together with the known inequalities by Jensen and Karamata. Let n≥1 be an integer and L a fixed real number. Then, for all real numbers x_1, x_2, …, x_n such that x_1 + x_2 + … + x_n = L, the expression | sin(x_1) + sin(x_2) + … + sin(x_n) | achieves its maximum when #{y_1, y_2, …, y_n}≤ 2, where the y_i are real numbers in [0,2π) such that y_i ≡ x_i 2π. In other words, all x_i leave the same remainder modulo 2π except maybe one. The idea of the proof is taken from a similar result for concave-convex functions in (Cirtoaje, 2006) <cit.>. Let y_i ∈ [0, 2π) be such that y_i ≡ x_i 2π. It is clear that sin(x_1) + … + sin(x_n) = sin(y_1) + … + sin(y_n) Now, suppose (<ref>) achieves a maximum at point (y_1, …,y_n) and write without loss of generality y_1 ≤ y_2 ≤…≤ y_n. Suppose for the sake of contradiction that y_1 < y_n-1, and we distinguish two cases. Case 1. If y_n-1≤π, using that sin(y) is a concave function in [0,π], we have by Jensen's inequality sin(x_1) + sin(x_n-1) = sin(y_1) + sin(y_n-1) < 2 sin(y_1 + y_n-1/2) = sin(y_1 + y_n-1 - y_1/2) + sin(y_n-1 - y_n-1 - y_1/2) = sin(x_1 + y_n-1 - y_1/2) + sin(x_n-1 - y_n-1 - y_1/2) with the inequality being strict since the variables are different, contradiction. Case 2. 
If y_n-1 > π and y_n + y_n-1-π < 2π, using that sin(y) is a strictly convex function in (π, 2π), we have by Karamata's majorization inequality that sin(x_n-1) + sin(x_n) = sin(y_n-1) + sin(y_n) < sin(π) + sin(y_n + y_n-1 - π) = sin(y_n-1 - (y_n-1 - π)) + sin(y_n + (y_n-1 - π) ) = sin(x_n-1 - (y_n-1 - π)) + sin(x_n + (y_n-1 - π) ) with the inequality being strict since y_n-1 > π, contradiction. Case 3. If y_n-1 > π and y_n + y_n-1-π≥ 2π, using that sin(y) is a convex function in (π, 2π], we have by Karamata's majorization inequality that sin(x_n-1) + sin(x_n) = sin(y_n-1) + sin(y_n) < sin(y_n + y_n-1 - 2π) + sin(2π) = sin(y_n-1 + y_n) + sin(y_n - y_n) = sin(x_n-1 + y_n) + sin(x_n - y_n ) with the inequality being strict since y_n < 2π. Altogether, we conclude that y_1 = y_n-1. Now, we should study the maximum of -( sin(x_1) + … + sin(x_n) ) but using that sin is an odd function, we might just apply the previous argument to variables -x_i, and a similar conclusion is reached. In particular, the previous result serves to prove the main lemma required for the proof of the theorem. Let r≥ 1 be a positive integer and L a fixed real value. Then, for all real variables A_1,A_2, …, B_r with ∑_p=1^r B_p - A_p = L it holds that ( ∑_p=1^r sin B_p - sin A_p)^2 + ( ∑_p=1^r cos B_p - cos A_p )^2≤ 4 sin^2 (L/2) To begin, define for simplicity h(A_1,A_2,…,B_r) ( ∑_p=1^r sin B_p - sin A_p)^2 + ( ∑_p=1^r cos B_p - cos A_p )^2 The key observation is that the value of the expression h in (<ref>) is independent of a shift of the variables [This observation arises more naturally during the proof of <ref>, for the case where the subset is a finite disjoint union of intervals. There, the A_p,B_p will be basically the endpoints of these intervals, so it is not surprising that the expression presented above depends only on the sizes of the intervals and their relative position and is therefore invariant by a shift.] h(A_1,A_2,…,B_r) = h(A_1+s,A_2+s,…,B_r+s) ∀ s ∈ℝ This can be easily seen after the following manipulation h(A_1,A_2,…,B_r) = ( ∑_p=1^r sin B_p - sin A_p)^2 + ( ∑_p=1^r cos B_p - cos A_p )^2 = ∑_p,q[ sin A_p sin A_q + sin B_p sin B_q; +cos A_p cos A_q + cos B_p cos B_q ] - 2∑_p,q[ sin B_p sin A_q; + cos B_p cos A_q ] = ∑_p,qcos(A_p-A_q) + cos(B_p-B_q) - 2cos(A_q-B_p) In particular, this shifting will be of importance since it will allow us to restrict our attention when looking for the maximum of h to a smaller subset of variables satisfying an additional constraint. Now, suppose that h achieves its maximum value under the given constraint ∑_p=1^r B_p - A_p = L at a point (A_1^*,A_2^*,…,B_r^*). Also, it is clear that for all points we have ∑_p=1^r cos B_p - cos A_p = - ( ∑_p=1^r cos (B_p+π) - cos (A_p+π) ) and since ∑_p=1^r cos (B_p+s) - cos (A_p+s) is a continuous function on s, there exists s^* ∈ [0,π) such that ∑_p=1^r cos (B_p^*+s^*) - cos (A_p^*+s^*) = 0 Therefore, for all points (A_1,A_2,…,B_r) where ∑_p=1^r B_p-A_p = L it holds 0 ≤ h(A_1,A_2,…,B_r) ≤ h(A_1^*, A_2^*, …, B_r^*) = h(A_1^*+s^*, A_2^*+s^*, …, B_r^*+s^*) = ( ∑_p=1^r sin (B_p^*+s^*) - sin (A_p^*+s^*) )^2 + 0 ≤max_{( ∑_p=1^r sin (B_p) + sin (-A_p) )^2 } where the set over which we are maximizing is = {-A_1,-A_2,…,B_r | ∑_p=1^r B_p -A_p = L } However, by <ref> it is immediate that the last expression in (<ref>) is maximized when all variables B_p and -A_q are equal modulo 2π except maybe one, say without loss of generality that it is B_r. 
Thus, we can write max_{(∑_p=1^r sin (B_p) + sin (-A_p) )^2 } = max_'{(∑_p=1^r sin (B_p) + sin (-A_p) )^2 } where the new set is ' = { -A_1,-A_2,…,B_r | [ ∑_p=1^r B_p -A_p = L; -A_1 ≡ -A_2 ≡…≡ B_r-12π ]}⊆ But even more, combining (<ref>) and (<ref>) we have max_ h(A_1,A_2,…,B_r) ≤max_'{(∑_p=1^r sin (B_p) + sin (-A_p) )^2 } ≤max_'{(∑_p=1^r sin B_p - sin A_p )^2 + (∑_p=1^r cos B_p - cos A_p )^2} = max_' h(A_1,A_2,…,B_r) where the inequality comes from adding a non-negative term to the expression. Now, using again that h is invariant by a shift of the variables, we can assume that A_1= 0. Then, taking into account 0 = -A_1 ≡ -A_2≡…≡ B_r-1 we have sin A_1 = sin A_2 = … = sin B_r-1 = 0, and also cos B_p = cos (-A_p) =cos A_p for all indices p<r. Therefore, continuing (<ref>) we deduce max_ h(A_1,A_2,…,B_r) ≤max_' ∩{A_1 = 0}{( sin B_r - sin A_r_=0)^2 + ( cos B_r - cos A_r )^2} =4 max_' ∩{A_1 = 0}{sin^2 ( B_r-A_r/2) } Finally, using again the equivalences 0 =-A_1 ≡ -A_2≡…≡ B_r-12π and the sum constraintwe have L = ∑_p= 1^r B_p-A_p = B_r-A_r - 2π k B_r-A_r/2 = π k + L/2 for some integer k, and hence max_'∩{A_1=0}{sin^2 ( B_r-A_r/2) } = sin^2 ( π k + L/2 ) =sin^2 ( L/2 ) Therefore, combining (<ref>) and (<ref>) the conclusion is immediate max_( ∑_p=1^r sin B_p - sin A_p)^2 + ( ∑_p=1^r cos B_p - cos A_p )^2≤ 4 sin^2 (L/2) Notice that equality can be achieved for example at point (A,A,…,A,A+L), where h evaluates to ( sin(A+ L) - sin A )^2 +( cos(A+ L) - cos A )^2 = 4sin^2 (L /2 ) In particular, these points of equality will correspond later, during the proof of the theorem, with the cases where is a single interval. § PROOF OF THE THEOREM Now, we are ready to introduce the proof of the theorem, which will be divided in two parts. Firstly, showing that the statement for the case where the support of the function is = ⊔_p=1^r (a_p,b_p) a disjoint union of intervals. Second, we will see that this implies the result in the case of a general measurable subset, by using the regularity of the Lebesgue measure. Let W and T be real numbers such that WT ≤ 1. Then, for a disjoint union of intervals = ⊔_p=1^r (a_p,b_p) with total size || = ∑_p=1^r b_p-a_p = T, and functions f whose Fourier transform has support f̂ = [-W/2,W/2], the following inequality is true ∫_ |f(t)|^2 dt ≤∫_-T/2^T/2| g(t) |^2 dt where g is the function given by the inverse Fourier transform of |f̂|. In particular, by denoting 𝕀 = [-T/2,T/2], it holds c_(f) ≤ c_𝕀 (g). Consider = ⊔_p=1^r (a_p,b_p) a finite disjoint union of intervals with total size T. Now, by a straight-forward computation, we have that for f with f̂ = [-W/2,W/2] it holds ∫_ |f|^2 dt = ∑_p=1^r ∫_a_p^b_p|f(t)|^2 dt = ∑_p=1^r ∫_a_p^b_p∫_-W/2^W/2∫_-W/2^W/2f̂(ω) f̂(η) e^2π i (η-ω) t dη dω dt = ∫_-W/2^W/2∫_-W/2^W/2f̂(ω) f̂(η)/η-ω∑_p=1^r 1/2π i( e^2π i (η-ω) b_p - e^2π i (η - ω) a_p) dη dω = 1/2π∫_-W/2^W/2∫_-W/2^W/2f̂(ω) f̂ (η)/|η - ω|∑_p=1^r [ [ sin(2π |η-ω| b_p) - sin(2π |η-ω| a_p) ]; -i [ cos(2π |η-ω| b_p) - cos(2π |η-ω| a_p) ] ] dη dω ≤1/2π∫_-W/2^W/2∫_-W/2^W/2 |f̂(ω) f̂(η)|/|η - ω|[ ( ∑_p=1^r sin B_p - sin A_p )^2 + ( ∑_p=1^r cos B_p - cos A_p )^2 ]^1/2 dη dω where we used the triangular inequality and trigonometric identities. Also, we denote B_p = 2π |η-ω| b_p and A_p = 2π |η - ω| a_p to ease the notation, and we have the understanding that whenever η = ω, the terms evaluate to the fix value 1/|η - ω|[ ( ∑_p=1^r sin B_p - sin A_p )^2 + ( ∑_p=1^r cos B_p - cos A_p )^2 ]^1/2 = = [ ( ∑_p=1^r 2π b_p - 2π a_p )^2 + 0 ]^1/2 = 2π T which only depends on the size of . 
Now, by the previous observation when η = ω, and using <ref> for the case when η≠ω, we have that under the constraint given by the total size of the intervals ∑_p=1^r B_p- A_p = 2π |η-ω|∑_p=1^r b_p-a_p = 2π |η-ω| T, it holds that 1/|η - ω| [ ( ∑_p=1^r sin B_p - sin A_p )^2 + ( ∑_p=1^r cos B_p - cos A_p )^2 ]^1/2≤2/|η-ω||sin( π |η-ω| T)| So substituting in (<ref>) we have ∫_ |f|^2 dw ≤1/π∫_-W/2^W/2∫_-W/2^W/2 |f̂(ω) f̂(η)|/|η - ω||sin( π |η-ω| T )| dη dω Now, by hypothesis WT ≤ 1, so it holds that π |η - ω| T ≤π WT ≤π Thus, we can ignore the absolute value around sin, and we have ∫_ |f|^2 dt ≤1/π∫_-W/2^W/2∫_-W/2^W/2 |f̂(ω) f̂(η)|/|η - ω|sin( π |η-ω| T ) dη dω = ∫_-W/2^W/2∫_-W/2^W/2 |f̂(ω) f̂(η)| ∫_-T/2^T/2 e^2π i (η-ω) t dt dη dω = ∫_-T/2^T/2∫_-W/2^W/2∫_-W/2^W/2 |f̂(ω)| e^ -2π i ω t|f̂(η)| e^-2π i η t dη dω dt = ∫_-T/2^T/2 |g|^2 dt where we define g to be the function given by the inverse Fourier transform of |f̂|, and f̂ is itself the Fourier transform of f. It is clear that g = f̂ = [-W/2, W/2], and also that ||g|| = ||f|| since by Plancherel ∫_ℝ |f|^2 dt = ∫_ℝ |f̂|^2 dw = ∫_-W/2^W/2 |f̂|^2 dw =∫_-W/2^W/2 |g|^2 dw = ∫_ℝ|g|^2 dw = ∫_ℝ |g|^2 dt so we get the desired conclusion and c_(f) ≤ c_𝕀 (g), where 𝕀=[-T/2,T/2]. Now, we extend this result to any measurable subset. Let W and T be real numbers such that WT ≤ 1. Then, for all measurable subsets of the real numbers with size || = T, and functions f whose Fourier transform has support f̂ = [-W/2,W/2],the following inequality is true ∫_ |f(t)|^2 dt ≤∫_-T/2^T/2| g(t) |^2 dt where g is the function given by the inverse Fourier transform of |f̂|. In particular, by denoting 𝕀 = [-T/2,T/2], it holds c_(f) ≤ c_𝕀 (g). Let be a measurable subset of the real numbers with measure || = T, and f a function whose Fourier transform has limited support f̂ = [-W/2,W/2] and for which the statement does not hold. Multiplying f by a scalar will not affect the concentration, so assume without loss of generality that f = 1. Therefore, we suppose, for the sake of contradiction, that c_ (f) = ∫_ |f(t)|^2 dt > ∫_𝕀 |g(t)|^2 dt = c_𝕀 (g) where g is the function given by the inverse Fourier transform of |f̂|. Next, it is well-known (see, for example, Theorem 3.4 in (Stein and Shakarchi, 2009) <cit.>) that for a measurable set with finite measure, for every ε > 0 there exist finitely many disjoint finite intervals 𝕁_1, …, 𝕁_r ⊆ℝ such that |Δ⋃_k=1^r 𝕁_k | < ε. Here, Δ refers to the symmetric difference of two sets, A Δ B (A ∖ B) ∪ (B ∖ A). Now, construct a sequence of measurable subsets which can be expressed as a finite disjoint union of finite intervals {_n }_n≥ 1, and such that |_n Δ| ≤1/n Then, if we define the interval 𝕀_n = (-| _n|/2, | _n|/2) we know by <ref> that ∫__n |f(t))|^2 dt ≤∫_𝕀_n |g(t)|^2 dt Also, it is clear that ∫__n |f(t)|^2 dt n⟶∫_ |f(t)|^2 dt ∫_𝕀_n |g(t)|^2 dt n⟶∫_𝕀 |g(t)|^2 dt Therefore, by Fatou's lemma and the inequality in (<ref>) we have ∫_ |f(t)|^2 dt ≤lim inf∫__n |f(t)|^2 dt ≤lim inf∫_𝕀_n |g(t)|^2 dt = ∫_𝕀 |g(t)|^2 dt This clearly contradicts our assumption (<ref>), as we wanted to show. Lastly, again since the norm of functions f and g is the same, we conclude c_(f) ≤ c_𝕀(g). § DISCRETE VERSION, IMPROVING MONTGOMERY'S RESULT Finally, we introduce a discrete version of the problem, which is solved using the same inequalities, and which also requires a similar additional bound. Let us consider polynomials of degree n≥ 1 and complex coefficients, that is P ∈ℂ_n[z]. 
Moreover, we will restrict our attention to measurable subsets of the unit circle 𝕋, which we will represent by their arguments as a complex number. To follow the notation of the previous work, for a measurable Ω⊆𝕋 and P ∈ℂ_n[z], let us denote c_Ω (P) = ∫_Ω |P(z)|^2 (z)∫_𝕋 |P(z)|^2 (z) Where (z) is the Lebesgue measure on the unit circle, normalized to 2π. In particular, the measure of a measurable subset Ω⊆𝕋, which we may write as Ω = {e^iθ : θ∈Θ}, is given by |Ω| = ∫_Ω(z) = ∫_Θ dθ Also, we will consider the norm of the polynomial P(z) = a_0 + a_1z + … + a_n z^n to be P = ∫_𝕋 |P(z)|^2 (z) = ∫_0^2π |P(e^iθ)|^2 dθ =2π( |a_0|^2 + |a_1|^2 + … + |a_n|^2 ) Then, the analogous to Conjecture 1 in this case is the following. Fix n≥ 1 an integer and δ > 0. Then, among all measurable subsets Ω of the complex unit circle with measure |Ω| = 2δ, the maximum of the concentration operator is attained on an interval 𝕀 of this same length. That is, sup_P ∈ℂ_ n[z] |Ω| = 2δ c_Ω (P) = sup_P ∈ℂ_n[z] c_𝕀 (P) And using an analogous approach, we will be able to prove a positive result once the additional hypothesis that nδ≤π is added, which will serve a similar purpose as WT≤ 1 required in the continuous version. In some sense, we can relate the size of the subset |Ω| = 2δ with T the size of the support in the time domain of f, and the degree n of P with W the size of the frequency domain of f̂. Lastly, notice that the position of the interval in the unit circle is not relevant, since for a given P(z) = a_0 + a_1 z + a_2 z^2 + … + a_n z^n and 𝕀 = (-δ,δ), we can take Q(z) = a_0 + a_1 e^-iθ z + a_2 e^-i 2θ z^2 + … + a_n e^-i n θ z^n and 𝕁 = (θ-δ, θ+δ) and it holds that c_𝕀 (P) = c_𝕁 (Q). Therefore, we will be considering 𝕀 = (-δ, δ). In particular, the positive result we prove is the following. Let Ω be a measurable subset of the complex unit circle, and let P(z) = a_0 + a_1z + … + a_n z^n be any polynomial of degree n≥ 1. Denote |Ω| = 2δ and suppose it holds that n δ≤π. Then, taking the interval 𝕀 = (-δ,δ) and the polynomial Q(z) = |a_0| + |a_1|z + … + |a_n|z^n, the following inequality is true ∫_Ω |P(z)|^2 (z) ≤∫_𝕀 |Q(z)|^2 (z) In particular, c_Ω(P) ≤ c_𝕀 (Q). The proof is analogous to the original continuous problem. Again, we must work out first the case where the subset is a finite disjoint union of intervals, and later extend it to any measurable subset of the unit circle using the regularity of the Lebesgue measure. Therefore, we will only include here the relevant details of the first part. First, by a straight-forward computation we have ∫_Ω |P(z)|^2 (z) = ∫__p=1^r (α_p, β_p) |P(e^iθ)|^2 dθ = 2 ∑_l,m = 0^n a_l a_m ∑_p=1^r e^i (l-m) β_p+α_p/2 sin( (l-m) β_p - α_p/2)/l-m = 2 ∑_l,m = 0^n a_l a_m/l-m ∑_p=1^r sin( (l-m) β_p - α_p/2) ·[ cos( (l-m) β_p+α_p/2) + i sin( (l-m) β_p+α_p/2) ] = ∑_l,m = 0^n a_l a_m/|l-m| [ ( ∑_p=1^r sin( |l-m| β_p) - sin( |l-m| α_p )) - i ( ∑_p=1^r cos( |l-m| β_p) - cos( |l-m| α_p ) ) ] ≤∑_l,m = 0^n |a_l| |a_m|/|l-m| [ ( ∑_p=1^r sin B_p - sin A_p)^2 + ( ∑_p=1^r cos B_p - cos A_p )^2 ]^1/2 Now, by <ref> and using the fact that 0 ≤ |l-m|δ≤ n δ≤π by hypothesis, it holds ∫_Ω |P(z)|^2 (z) ≤ 2 ∑_l,m = 0^n |a_l| |a_m|/|l-m|| sin( |l-m| δ) | = 2 ∑_l,m = 0^n |a_l| |a_m| /|l-m| sin( |l-m| δ) = ∫_𝕀 |Q(z)|^2 (z) where 𝕀 = (-δ, δ) is a single interval of size 2δ = |Ω|, and Q(z) = |a_0| + |a_1|z + … + |a_n|z^n is the polynomial whose coefficients are the norms of the coefficients of P. 
Notice that the norm of the new polynomial is the same as the norm of the original one, ∫_0^2π |P(e^iθ)|^2 dθ = 2π( |a_0|^2 + |a_1|^2 + … + |a_n|^2 ) = ∫_0^2π |Q(e^iθ)|^2 dθ and therefore c_Ω(P) ≤ c_𝕀(Q). Now, we will use this to also improve an inequality result by Montgomery, when adding the hypothesis nδ≤π. The result in question appears in <cit.>, where he presents a similar inequality to what we have obtained but with an extra factor, and which only applies (in our context) to symmetric polynomials of even degree. Our improvement is then reducing this factor from 20 to 1, which is actually the best possible, and extending it to any polynomial when the condition nδ≤π is added. We might directly state his result in our context, by rather taking 𝕋 = ℝ/ 2πℤ and the functions φ_k = cos k x. Nonetheless, it should be mentioned that Montgomery's result applies in a more general setup for sets of functions {φ_k} which are uniformly bounded and satisfy a Bessels' type inequality. [Thm. 1, <cit.>] Let f(x) = ∑_k=0^∞ a_k cos k x, and define f^**(x) = ∑_k=0^∞ a_k^* cos k x where the a_k^* are the numbers |a_k|, permuted so that {a_k^*}_k=0^∞ is a decreasing sequence. Then for any measurable set Ω⊆𝕋, with measure |Ω| = 2δ we have ∫_Ω |f|^2 ≤ 20 ∫_-δ^δ |f^**|^2 Where this f^** rearrangement can be understood as a discrete version of the symmetric rearrangement f^* presented in the introduction. Notice, however, that for f(x) = ∑_k=0^n a_k cos k x it holds that f(x) = a_0 + ∑_k=-n k≠ 0^na_|k|/2 e^i k x = e^-i n x( a_0 e^i n x + ∑_k=0 k≠ n^2na_|k-n|/2 e^i k x) so it is natural to consider the polynomial P(z) = 1/2(a_n + a_n-1 z^1 + … + a_1 z^n-1 + 2a_0 z^n + a_1 z^n+1 + … + a_n-1z^2n-1 + a_n z^2n) and we have that |f(x)|^2 = | a_0 e^i n x + ∑_k=0 k≠ n^2na_|k-n|/2 e^i k x|^2 = |P(e^ix)|^2 Therefore, Montgomery's result states in our context the following. Let P(z) = b_n + b_n-1 z + … + 2b_0 z^n + … + b_n-1 z^2n-1 + b_n z^2n be a symmetric polynomial of even degree 2n ≥ 2, and define P^*(z) = b_n^* + b_n-1^* z + … + 2b_0^* z^n + … + b_n-1^* z^2n-1 + b_n^* z^2n where the b_k^* are the numbers |b_k|, permuted so that {b_k^*}_k=0^n is a decreasing sequence. Then for any measurable set Ω⊆𝕋, with measure |Ω| = 2δ we have ∫_Ω |P(z)|^2 (z) ≤ 20 ∫_-δ^δ |P^*(z)|^2 (z) To the best of our knowledge, it is not known whether the result still holds when reducing the factor 20 to 1 without having any additional hypothesis. For example, this has been proven to be true when the integral is over the whole unit circle, in (Gabriel, 1932) <cit.>. [Thm. 4, <cit.>] Given an integer k≥ 1, and the functions A(θ) = ∑_r=-R^R a_r e^i rθ, A^*(θ) = ∑_r=-R^R a_r^+ e^i rθ where the a_r^+ are the numbers |a_r| ordered such that a_0^+ ≥ a_-1^+ ≥ a_1^+ ≥ a_-2^+ ≥ a_2^+ ≥…, then ∫_0^2π |A(θ)|^2kdθ≤∫_0^2π |A^*(θ)|^2kdθ It should be mentioned that this is actually an improvement on the same result appearing in (Hardy and Littlewood, 1948)  <cit.>, but which had some additional symmetry hypothesis on the coefficients. Back to our work, the statement that we will prove is then Let P be any polynomial of degree n≥ 1. Let Ω⊆𝕋 be a measurable set, with measure | Ω| = 2δ. Suppose it holds that n δ≤π, then ∫_Ω |P(z)|^2 (z) ≤∫_-δ^δ |P^*(z)|^2 (z) In particular, since they have the same norm, we have c_Ω (P) ≤ c_𝕀 (P^*) where 𝕀 = (-δ, δ). Here, the rearrangement of the coefficients in P^* will be described in the conclusion of the next <ref>. 
For example, the rearrangement can be taken as a_⌈ n/2 ⌉^* ≥ a_⌊ n/2 ⌋^* ≥ a_⌈ n/2 ⌉ + 1^* ≥ a_⌊ n/2 ⌋ -1^* ≥… n a_n/2^* ≥ a_n/2 +1^* ≥ a_n/2 -1^* ≥… n even Informally: the largest coefficient is the central one, then the one to the right of the central one, then the one to the left of the central one, then the second to the right of the central one... and so on. As for the proof of the theorem, we have already done most of the work in <ref>, and we only have left proving that rearranging the (real positive) coefficients of a polynomial increases the integral of its norm squared over the symmetric interval. The main lemma, which will give the explicit rearrangement needed in our case, is the following[It should be noted that this lemma is the discrete version of <ref>, which is rather the natural evolution for continuous functions. Therefore, the final proof for our improvement on the result by Montgomery uses a similar step as the proof given by Donoho and Stark in their continuous result. ]. Consider the form S(x,y) = ∑_l=0^n ∑_m=0^n s_l-m x_l y_m Suppose that the coefficients being given satisfy s_0≥ s_1 ≥…≥ 0 and also s_ν=s_-ν, and the variables satisfy x_l≥ 0, y_m ≥ 0, being given in every respect except arrangement. Then among the arrangements for which S assumes its maximum value there is one in which * x_μ≤ x_μ' if |μ'-n/2| < |μ-n/2| * no two of x_μ-x_μ', where μ < μ', |μ'-n/2| = |μ-n/2|, have different signs. and analogous conditions for variables y_ν. We are now ready to give the proof of our result. As mentioned, we only have left to prove that for P(z) = a_0 + a_1z + … + a_n z^n with positive real coefficients, it holds that ∫_-δ^δ |P(z)|^2 (z) ≤∫_-δ^δ |P^*(z)|^2 (z) where P^*(z) = a_0^* + a_1^* z + … + a_n^* z^n is the polynomial with coefficients rearranged as stated for variables x_l,y_m in <ref>. Now, by a simple computation ∫_-δ^δ |P(z)|^2 (z) = 2 ∑_l,m = 0^n a_l a_m/l-msin( (l-m) δ) We might now take x_l = y_l = a_l the (positive) coefficients of the polynomial, and the variables s_ν = 2 sin(νδ) ν with the understanding that s_0 = 2δ. The symmetry of the variables s_ν is clear, and the hypothesis nδ≤π guarantees that they are positive and decreasing, so that we might apply <ref>. Hence, we deduce that ∫_-δ^δ |P(z)|^2 (z) ≤ 2∑_l,m = 0^n a_l^* a_m^*/l-msin( (l-m) δ) = ∫_-δ^δ |P^*(z)|^2 (z) Combining this inequality with <ref> completes the proof. plain get arXiv to do 4 passes: Label(s) may have changed. Rerun ]
http://arxiv.org/abs/2307.05893v1
20230712034826
Deep Unrolling for Nonconvex Robust Principal Component Analysis
[ "Elizabeth Z. C. Tan", "Caroline Chaux", "Emmanuel Soubies", "Vincent Y. F. Tan" ]
eess.SP
[ "eess.SP", "cs.LG" ]
Deep Unrolling for Nonconvex Robust Principal Component Analysis Elizabeth Z. C. Tan, Caroline Chaux, Emmanuel Soubies, Vincent Y. F. Tan ============================================================================== We design algorithms for Robust Principal Component Analysis (RPCA), which consists in decomposing a matrix into the sum of a low-rank matrix and a sparse matrix. We propose a deep unrolled algorithm based on an accelerated alternating projection algorithm which aims to solve RPCA in its nonconvex form. The proposed procedure combines the benefits of deep neural networks and the interpretability of the original algorithm, and it automatically learns hyperparameters. We demonstrate the unrolled algorithm's effectiveness on synthetic datasets and also on a face modeling problem, where it leads to both better numerical and visual performances. RPCA, Sparsity, low-rank, unrolled algorithm, hyperparameters. § INTRODUCTION Robust Principal Component Analysis (RPCA) is the task of recovering a low-rank matrix Ł^⋆ ∈ ℝ^d_1 × d_2 and a sparse matrix 𝐒^⋆ ∈ ℝ^d_1 × d_2 from their linear combination <cit.> 𝐃^⋆ = Ł^⋆ + 𝐒^⋆. Finding an exact solution to the RPCA problem is challenging due to its combinatorial nature. Yet, RPCA has received considerable attention due to its importance in many fields. These include applications from latent semantic indexing <cit.> to image processing <cit.>, to learning graphical models with latent variables <cit.>, and to collaborative filtering <cit.>. *The art of conventional RPCA: Some authors <cit.>, <cit.> considered a convex relaxation of RPCA, where the low-rank matrix is obtained through the minimization of the nuclear norm and the sparse matrix via an ℓ_1-norm penalization. Such optimization problems can be solved by proximal gradient methods. However, such approaches are computationally expensive due to the proximal mapping of the nuclear norm, which involves a full singular value decomposition (SVD) of a d_1 × d_2 matrix, amounting to at least 𝒪(d_1 d_2 min(d_1, d_2)) flops per iteration. In contrast, alternating algorithms have been proposed to solve the original nonconvex formulation of RPCA involving the ℓ_0 pseudo-norm and the rank function (Section <ref>). These include the alternating projections (AltProj) method <cit.>, its accelerated version (AccAltProj) <cit.>, and a block-based method based on the CUR decomposition <cit.>. Although faster and more closely related to model (<ref>) compared to methods based on convex relaxations, their performance relies heavily on good initializations. *Learning-based strategies in RPCA: Deep neural networks (DNNs) have experienced a surge in popularity over the past decades, often attaining groundbreaking performance in various applications. In signal processing, incorporating deep learning approaches has become prominent because of their ability to automatically learn salient information from real-world data. However, DNNs are known to suffer from two shortcomings. Firstly, their black-box nature (i.e., the lack of interpretability) hinders our understanding of why certain predictions are derived, which is crucial in detecting limitations. Secondly, they are susceptible to overfitting to the training data since they often have a large number of parameters compared to the amount of available training data. To overcome these limitations, a technique known as deep unrolling (also known as deep unfolding) has been extensively explored <cit.> and has emerged as a promising approach in various signal processing problems.
While the model parameters are fixed in the classical algorithms, the unrolled network replaces them with learnable parameters that can be optimised through end-to-end training using backpropagation. Therefore, a trained unrolled network can be viewed as a parameter-optimised algorithm, sharing both the benefits of conventional DNNs and the interpretability of the original algorithm. Furthermore, as classical algorithms often have significantly fewer parameters than DNNs, unrolled networks can potentially mitigate the overfitting problem when there is insufficient data or when the training dataset is of low quality. Existing unrolling strategies in the context of RPCA are currently limited to algorithms based on convex relaxations. These include CORONA <cit.>, refRPCA <cit.>, and other similar works <cit.>, <cit.>, <cit.>, <cit.>. However, they inherit the previously mentioned drawbacks of such convex relaxations. To the best of our knowledge, there is no unrolled version of the alternating projections algorithm for RPCA, despite it being the state of the art. Such an unrolled algorithm would be beneficial: it would retain the appealing computational properties and the closeness to the model in (<ref>) of nonconvex RPCA approaches, while mitigating their shortcomings (sensitivity to the initialization and the need to hand-tune hyperparameters).

Contributions: We propose an unrolled version of the Accelerated Alternating Projections algorithm <cit.>. The proposed procedure also incorporates the Minimax Concave Penalty (MCP), an alternative to hard thresholding with numerous appealing properties that is more suitable than the ℓ_1-norm relaxation <cit.>. The overall proposed procedure performs excellently on benchmark synthetic datasets and on a real-world face dataset, exceeding the performance of state-of-the-art (unrolled) approaches.

Outline: The paper is organised as follows. Preliminaries on RPCA algorithms and deep unrolling are presented in Section <ref>. The proposed method is then described in Section <ref> and numerical experiments are conducted in Section <ref>. Finally, concluding remarks are presented in Section <ref>.

§ PRELIMINARIES

§.§ Algorithms for RPCA

RPCA may be formulated as the following nonconvex optimization problem
min_{𝐋, 𝐒 ∈ ℝ^d_1 × d_2} ‖𝐌^⋆ − 𝐋 − 𝐒‖_F, subject to rank(𝐋) ≤ r and ‖𝐒‖_0 ≤ k,
where ‖·‖_F is the Frobenius norm, r ≥ rank(𝐋^⋆) upper bounds the rank of the low-rank matrix 𝐋^⋆, and k ≥ |Ω| upper bounds the cardinality of the support Ω of the sparse matrix 𝐒^⋆. Netrapalli et al. <cit.> proposed to solve (<ref>) using the alternating projections (AltProj) method, which projects 𝐌^⋆ − 𝐒_k onto the space of low-rank matrices and 𝐌^⋆ − 𝐋_k onto the set of sparse matrices in an alternating manner at each iteration k. It enjoys a computational complexity of 𝒪(d_1 d_2 r^2) per iteration. Building upon AltProj, Cai et al. <cit.> proposed an accelerated version known as AccAltProj with an improved complexity of 𝒪(d_1 d_2 r). Later, Cai et al. <cit.> introduced the Iterated Robust CUR (IRCUR) method, a variant of AltProj with a per-iteration complexity of 𝒪(r^2 n log n), where n = max(d_1, d_2). This is achieved by operating on submatrices, hence avoiding expensive computations on full matrices. However, it is widely acknowledged that CUR-based decompositions are less accurate than SVD-based ones. We briefly describe AccAltProj in Alg. <ref>, before we move on to the proposed unrolled model. Here, ℳ_r denotes the set of rank-r matrices, and T_k denotes the tangent space of ℳ_r at 𝐋_k.
The i-th largest singular value of a matrix 𝐗 is denoted by σ_i(𝐗). The operator 𝐇_r represents the truncated SVD at rank r, and 𝒯_ζ represents the hard-thresholding operator (i.e., the proximity operator of the ℓ_0 pseudo-norm) with threshold ζ. AccAltProj differs from AltProj by first projecting 𝐌^⋆ − 𝐒_k onto the tangent space T_k rather than directly onto ℳ_r. This is followed by projecting the intermediate matrix onto ℳ_r to obtain 𝐋_k+1, before projecting 𝐌^⋆ − 𝐋_k+1 back onto the set of sparse matrices. Cai et al. <cit.> derived the projection operator onto T_k as:
P_T_k(𝐀) = [ 𝐔_k  𝐐_1 ] [ 𝐔_k^⊤ 𝐀 𝐕_k  𝐑_2^⊤ ; 𝐑_1  0 ] [ 𝐕_k^⊤ ; 𝐐_2^⊤ ],
where 𝐔_k and 𝐕_k contain the singular vectors from the truncated SVD 𝐋_k = 𝐔_k Σ_k 𝐕_k^⊤, and (𝐐_1, 𝐑_1) and (𝐐_2, 𝐑_2) are the factors from the QR decompositions of (𝐈 − 𝐔_k 𝐔_k^⊤)(𝐌^⋆ − 𝐒_k) 𝐕_k and (𝐈 − 𝐕_k 𝐕_k^⊤)(𝐌^⋆ − 𝐒_k)^⊤ 𝐔_k, respectively.

§.§ Deep Unrolling

A tedious task in implementing iterative optimization algorithms is to tune their hyperparameters (e.g., step size, regularisation parameters). To circumvent this problem, unrolled versions of standard algorithms <cit.> have recently been developed. In essence, algorithm unrolling (or unfolding) consists in converting an iterative algorithm into a neural network: each iteration of the iterative algorithm is transformed into one layer of the neural network. The benefits of this approach include neural network interpretability and automatic parameter learning. Following this line of thought, we propose to unroll the Accelerated Alternating Projections algorithm (Alg. <ref>).

§ PROPOSED UNROLLED ACCALTPROJ

We adopt AccAltProj as our baseline model to unroll, as it is fast compared to most existing algorithms and more robust than IRCUR. We follow the idea of the Learned Iterative Soft Thresholding Algorithm <cit.> to design a non-linear feed-forward architecture with a fixed number of layers. As β and γ are fixed heuristically in AccAltProj, we choose to learn them in the unrolled network. The parameter β controls the variance of the entries of the recovered 𝐋̂, while γ controls the rate of convergence <cit.>. They also play a key role in the theoretical guarantee of AccAltProj. More precisely, if properly chosen, the initial guesses 𝐒_-1 and 𝐋_0 generated at Lines 1 to 5 of Alg. <ref> fulfill the condition required for local convergence of AccAltProj <cit.>. Learning β and γ automatically allows our model to be customised to use cases where datasets share similar properties for the underlying low-rank and sparse components.

§.§ Using the Minimax Concave Penalty (MCP) instead of the ℓ_0 or ℓ_1 norms

One challenge when developing an unrolled version of AccAltProj is that we are unable to directly use hard thresholding for the non-linear activation. This is because it is not subdifferentiable, a property needed to deploy gradient-based optimizers to learn the parameters β and γ <cit.>. LRPCA <cit.> tackled this problem by replacing the hard-thresholding operator with the soft-thresholding operator in their unrolled model. However, the soft-thresholding operator is the proximal mapping of the ℓ_1 norm, while the hard-thresholding operator is the proximal mapping of the ℓ_0 pseudo-norm. As such, vanilla soft-thresholding is not suitable for our objective in (<ref>).
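To make the contrast between the two operators concrete, a minimal numpy sketch (illustrative only, with ζ as a free threshold) is given below.

```python
import numpy as np

def hard_threshold(X, zeta):
    # Proximal mapping of the l0 pseudo-norm: keep entries with |x| > zeta, zero out the rest.
    # Discontinuous at |x| = zeta, so gradients cannot be propagated through it reliably.
    return np.where(np.abs(X) > zeta, X, 0.0)

def soft_threshold(X, zeta):
    # Proximal mapping of the l1 norm: shrink every entry toward zero by zeta.
    # Amenable to backpropagation, but it biases (shrinks) large entries,
    # which is why it does not match the l0-constrained objective targeted here.
    return np.sign(X) * np.maximum(np.abs(X) - zeta, 0.0)
```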
Taking the best of both worlds, we consider in this work the Minimax Concave Penalty (MCP) <cit.>, defined as
MCP(x; ζ, υ) = υζ^2/2 if |x| > υζ, and MCP(x; ζ, υ) = ζ|x| − x^2/(2υ) if |x| ≤ υζ,
where ζ is the threshold and υ > 1 is a parameter controlling the concavity of the penalty. It has a close relationship to the ℓ_0 pseudo-norm, from both the statistical <cit.> and optimization viewpoints <cit.>, while being subdifferentiable. Its proximal mapping 𝒫(x; ζ, υ) := prox_MCP(x; ζ, υ) is
𝒫(x; ζ, υ) = sign(x) · min{ υ max(|x| − ζ, 0)/(υ − 1), |x| },
which corresponds to the “firm thresholding operator” <cit.>, a compromise between soft and hard thresholding. In the unrolled version of Alg. <ref>, we use 𝒫(·; ζ, υ) in place of the hard-thresholding operator 𝒯_ζ. The penalty functions and their corresponding proximal mappings are shown in Fig. <ref>.

§.§ Unrolled AccAltProj

We consider an unrolled version of the modified Accelerated Alternating Projections algorithm and refer to it as the unrolled RPCA algorithm. Each iteration is thus transformed into one layer, as shown in Fig. <ref>. We use this neural network to learn β and γ while keeping υ fixed (υ = 1.05).

§.§ Training Criteria

While it is possible to use adaptive parameters (i.e., separate β_k, γ_k for each layer k), we choose to learn only a single pair (β, γ) that is shared across the layers. In the unrolled model, we initialise β = 1/2 · √(d_1 × d_2) and γ = 0.7, since these are the default values used in AccAltProj <cit.>. Consider a set of input data {𝐌_train^q}_q=1^Q and the associated sparse and low-rank decompositions, which we denote by {(𝐋_train^q, 𝐒_train^q)}_q=1^Q. These can be obtained either via simulations or through the application of classical iterative RPCA algorithms to 𝐌_train^q. Then, following <cit.>, we learn the two parameters γ and β via
(γ̂, β̂) ∈ argmin_(γ, β) ∈ ℝ^2 ∑_q=1^Q ℒ(𝐋_train^q, 𝐋^q) + ℒ(𝐒_train^q, 𝐒^q) subject to (𝐋^q, 𝐒^q) = 𝒩(γ, β; 𝐌_train^q),
where 𝒩 is defined by cascading layers as in Fig. <ref> (unrolled network). Finally, we set the loss ℒ to be the relative error ℒ(𝐗̃, 𝐗) = ‖𝐗̃ − 𝐗‖_F^2 / ‖𝐗‖_F^2.

§ NUMERICAL EXPERIMENTS

To illustrate the effectiveness of the proposed approach, we performed experiments in two settings: a fully controlled one based on synthetic simulations, and a realistic one in the context of face modeling. The code to reproduce the simulations will be released.

§.§ Simulated/Synthetic Data

Problem setup: The synthetic data are generated as in <cit.>, i.e., we let 𝐋^⋆ = 𝐔𝐕^⊤, where 𝐔, 𝐕 ∈ ℝ^d × r contain elements generated i.i.d. from the standard normal distribution. Similarly, the non-zero components of 𝐒^⋆ are sampled i.i.d. and uniformly from the interval [−c · 𝔼(|[𝐋^⋆]_ij|), c · 𝔼(|[𝐋^⋆]_ij|)], where c > 0. The positions of the non-zero elements are randomly sampled without replacement. In the following, the matrix 𝐒^⋆ is said to be α-sparse if each of its rows and columns contains at most αd non-zero elements. Finally, given a generated pair (𝐋^⋆, 𝐒^⋆), we generate an input-target training data matrix as 𝐌_train = 𝐌^⋆ = 𝐋^⋆ + 𝐒^⋆, and (𝐋_train, 𝐒_train) is obtained via IRCUR applied to 𝐌_train. For this experiment, we fix the dimensions to d_1 = d_2 = d = 250 and the rank to r = 2. We consider several simulated datasets generated by varying the sparsity level (controlled by α) and the amplitude (controlled by c) of the sparse component 𝐒^⋆. More precisely, we consider four cases (Cases 1 to 4) to assess the performance of our unrolled network. For each case, we generate a total of 300 samples and split them into 180 training samples and 120 test samples.
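As an illustration of the setup above, a single synthetic instance could be generated as in the following sketch; for simplicity, it samples a global support with about αd non-zero entries per row on average instead of enforcing the per-row and per-column cap, and it replaces the expectation 𝔼(|[𝐋^⋆]_ij|) by its empirical mean.

```python
import numpy as np

def generate_instance(d=250, r=2, alpha=0.1, c=1.0, seed=None):
    # Generate one synthetic RPCA instance M* = L* + S* (a sketch of the setup above).
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((d, r))
    V = rng.standard_normal((d, r))
    L_star = U @ V.T                                  # rank-r component
    amp = c * np.mean(np.abs(L_star))                 # empirical stand-in for c * E|[L*]_ij|
    n_nonzero = int(alpha * d) * d                    # ~ alpha*d non-zeros per row on average
    support = rng.choice(d * d, size=n_nonzero, replace=False)
    S_star = np.zeros(d * d)
    S_star[support] = rng.uniform(-amp, amp, size=n_nonzero)
    S_star = S_star.reshape(d, d)                     # sparse component
    return L_star + S_star, L_star, S_star
```

The resulting 𝐌_train = 𝐋^⋆ + 𝐒^⋆ then serves as the input, with (𝐋_train, 𝐒_train) obtained by running IRCUR on it as the training target.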
The unrolled network is trained for a total of 8 epochs. The metrics that we use to quantify the performance of the unrolled model and its competitors are as follows:
ϵ_L(𝐋_out) := ‖𝐋^⋆ − 𝐋_out‖_F,
ϵ_S(𝐒_out) := ‖𝐒^⋆ − 𝐒_out‖_F,
ϵ_M(𝐋_out, 𝐒_out) := ‖𝐌^⋆ − 𝐋_out − 𝐒_out‖_F / ‖𝐌^⋆‖_F,
ϵ_supp(𝐒_out) := (1/d^2) ∑_i,j ( 1_{[𝐒^⋆]_ij = 0, [𝐒_out]_ij ≠ 0} + 1_{[𝐒^⋆]_ij ≠ 0, [𝐒_out]_ij = 0} ),
where 𝐋_out and 𝐒_out are placeholders for the outputs computed by IRCUR, AccAltProj, or the unrolled model (after training). These four errors respectively quantify the accuracy of 1) the estimation of 𝐋^⋆, 2) the estimation of 𝐒^⋆, 3) the reconstruction of the overall matrix 𝐌^⋆, and 4) the support recovery of 𝐒^⋆.

Results: In Fig. <ref>, we report the four errors described above for the four cases. We compare the performance of the proposed approach with IRCUR <cit.> and AccAltProj <cit.> (which are not unrolled algorithms). We observe from Fig. <ref> that the proposed unrolled algorithm improves over its classical counterpart, which means that the hyperparameters are learned well. The lowest error ϵ_M is always achieved by the IRCUR method, and this error remains small (of order 10^-8 versus 10^-7) for the other methods. This can be explained by the fact that the hyperparameters are learnt so as to minimize the errors on 𝐋 and 𝐒. This is confirmed by the results obtained individually on the matrices 𝐋 and 𝐒, for which the smallest error is always obtained by the proposed unrolled method. Finally, as expected, the unrolled algorithm using the ℓ_1 norm instead of the MCP does not perform well.

From Table <ref>, we observe that the learned γ's are similar across the different settings, all slightly larger than their initialised value of 0.7. This suggests that the default value γ = 0.7 suggested in <cit.> is a fairly good estimate. The slight increase may be because AccAltProj implements an early stopping criterion, where the algorithm stops once the error ‖𝐌^⋆ − 𝐋_k − 𝐒_k‖_F / ‖𝐌^⋆‖_F at iteration k < 50 falls below the tolerance of 10^-6. As such, for a larger fixed number of layers, a larger γ is needed so that the network converges more slowly to the same point. Conversely, the learned γ would be smaller if we reduced the number of layers of the unrolled network. This means that the learned γ is optimised for the given fixed number of layers in the network. The learned β exhibits much more variation across the different cases. In particular, it increased by a factor of about 2 relative to its initial value in Cases 1 and 2, and by about 1.5 in Cases 3 and 4. This observation is in line with the interpretation of β in <cit.>, which states that a higher value of β results in an 𝐋̂ that is more “spiky” and an 𝐒̂ that is more heavily diffused. We take Case 1 as the baseline against which the other cases are compared. In Case 2, where α is greater than in Case 1, there are more non-zero values in 𝐒^⋆, making it more diffused. In contrast, with a smaller α in Case 3, the few non-zero elements of 𝐒^⋆ become more prominent against the backdrop of the other zero-valued elements, making it less diffused. In Case 4, where the magnitude of the non-zero elements of 𝐒^⋆ is 10 times that in Case 1, the non-zero values are more pronounced and hence 𝐒^⋆ is less diffused. As the learned β matches our expectation from theory in each case, this demonstrates that our unrolled model is indeed able to automatically fine-tune the parameter β to the different settings, which is an advantage over the classical AccAltProj.
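For completeness, the four criteria defined at the beginning of this subsection are straightforward to compute; a minimal numpy sketch, using the same placeholder names as in the text, is given below.

```python
import numpy as np

def evaluation_metrics(L_star, S_star, L_out, S_out):
    # The four criteria used above: errors on L, on S, on the overall matrix M,
    # and the fraction of entries whose zero/non-zero status is misclassified.
    M_star = L_star + S_star
    eps_L = np.linalg.norm(L_star - L_out, 'fro')
    eps_S = np.linalg.norm(S_star - S_out, 'fro')
    eps_M = np.linalg.norm(M_star - L_out - S_out, 'fro') / np.linalg.norm(M_star, 'fro')
    eps_supp = np.mean((S_star == 0) != (S_out == 0))
    return eps_L, eps_S, eps_M, eps_supp
```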
§.§ Face Dataset

Problem setup: We now test the proposed unrolled model on the Yale Face Database <cit.> for the application of face modeling. The Yale database consists of 11 grayscale facial images for each of 15 subjects. The 11 images, each of dimension 243 × 320, show the same individual with different facial expressions, lighting conditions, or accessories such as spectacles. The task of face modeling is to recover the occlusion-free image for facial recognition <cit.>. We vectorise the images of each subject and stack them to form a 77760 × 11 matrix 𝐌^⋆. The static occlusion-free image of the subject forms the low-rank component of the matrix, while the varied facial expressions, shadows, and objects covering the face form the sparse component. Since all 11 images share one common underlying occlusion-free facial image, we assume that rank(𝐋^⋆) = 1. Subjects 1 to 7 and Subjects 8 to 15 are used for training and testing, respectively. As in the experiment on synthetic datasets, we use IRCUR to obtain initial estimates and train the unrolled network for 8 epochs.

Results: Visual results are displayed in Figs. <ref> and <ref> for the low-rank and sparse parts, respectively. The methods enable the separation of the original images into expressionless faces and expression details. While the images recovered by IRCUR are poor, the proposed unrolled strategy adaptively learns hyperparameters (γ, β) that result in sharper edges. Tests were performed on an Intel(R) Core(TM) i7-1185G7 @3.00GHz with 32GB RAM. The total training time (7 subjects) is 300s. Testing times (per subject, on average) are as follows: IRCUR: 8s, AccAltProj: 0.75s, and the unrolled procedure: 4.375s, showing that the unrolled procedure offers a good accuracy-computation time tradeoff.

§ CONCLUSION

We proposed an unrolled algorithm to solve the RPCA problem in its nonconvex form. The result is an unrolled version of the AccAltProj algorithm that incorporates the Minimax Concave Penalty. The underlying learning strategy, which has the advantage of learning the hyperparameters γ and β automatically, allows us to improve on state-of-the-art performance on the benchmark synthetic datasets used in existing works as well as on a real-world face dataset. In future work, we plan to improve the training criterion in Section <ref> and to extend the automatic learning to additional parameters, such as those that parametrize the MCP, i.e., ζ and υ in (<ref>).